How can I get href links from HTML using Python?

Time: 2022-05-30 07:32:48
import urllib2

website = "WEBSITE"
openwebsite = urllib2.urlopen(website)
html = openwebsite.read()

print html

So far so good.

But I only want the href links from the plain-text HTML. How can I solve this problem?

6 Solutions

#1


71  

Try with BeautifulSoup:

from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3, Python 2
import urllib2
import re

html_page = urllib2.urlopen("http://www.yourwebsite.com")
soup = BeautifulSoup(html_page)
for link in soup.findAll('a'):
    print link.get('href')

In case you just want links starting with http://, you should use:

soup.findAll('a', attrs={'href': re.compile("^http://")})
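
If you're on Python 3, a rough equivalent (a sketch, assuming the beautifulsoup4 package is installed; the URL is the same placeholder as above) would be:

import re
import urllib.request
from bs4 import BeautifulSoup

html_page = urllib.request.urlopen("http://www.yourwebsite.com")
soup = BeautifulSoup(html_page, "html.parser")

# All href values:
for link in soup.find_all('a'):
    print(link.get('href'))

# Only links starting with http://:
for link in soup.find_all('a', attrs={'href': re.compile("^http://")}):
    print(link.get('href'))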

#2


26  

You can use the HTMLParser module.

The code would probably look something like this:

from HTMLParser import HTMLParser

class MyHTMLParser(HTMLParser):

    def handle_starttag(self, tag, attrs):
        # Only parse the 'anchor' tag.
        if tag == "a":
            # Check the list of defined attributes.
            for name, value in attrs:
                # If href is defined, print it.
                if name == "href":
                    print name, "=", value


parser = MyHTMLParser()
parser.feed(your_html_string)

Note: The HTMLParser module has been renamed to html.parser in Python 3.0. The 2to3 tool will automatically adapt imports when converting your sources to 3.0.
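
As a sketch, the same parser ported to Python 3 (your_html_string is the same placeholder as above):

from html.parser import HTMLParser

class MyHTMLParser(HTMLParser):

    def handle_starttag(self, tag, attrs):
        # Only parse the 'anchor' tag and print any href attribute.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    print(name, "=", value)


parser = MyHTMLParser()
parser.feed(your_html_string)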

#3


10  

Look at using the Beautiful Soup HTML parsing library:

http://www.crummy.com/software/BeautifulSoup/

You will do something like this:

import BeautifulSoup
soup = BeautifulSoup.BeautifulSoup(html)
for link in soup.findAll("a"):
    print link.get("href")

#4


6  

My answer probably sucks compared to the real gurus out there, but using some simple math, string slicing, find and urllib, this little script will create a list containing the link elements. I tested it on Google and my output seems right. Hope it helps!

import urllib
test = urllib.urlopen("http://www.google.com").read()
sane = 0
needlestack = []
while sane == 0:
  curpos = test.find("href")
  if curpos >= 0:
    testlen = len(test)
    test = test[curpos:testlen]
    curpos = test.find('"')
    testlen = len(test)
    test = test[curpos+1:testlen]
    curpos = test.find('"')
    needle = test[0:curpos]
    # Note: startswith takes a tuple of prefixes; the original
    # "http" or "www" would always evaluate to just "http".
    if needle.startswith(("http", "www")):
        needlestack.append(needle)
  else:
    sane = 1
for item in needlestack:
  print item
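
For comparison, here is a sketch of the same find-and-slice idea in Python 3, scanning with an index instead of repeatedly re-slicing the string (it assumes attributes are written exactly as href="..."):

import urllib.request

html = urllib.request.urlopen("http://www.google.com").read().decode("utf-8", "replace")

links = []
pos = html.find('href="')
while pos >= 0:
    start = pos + len('href="')     # skip past href=" to the attribute value
    end = html.find('"', start)     # closing quote of the value
    if end < 0:
        break
    candidate = html[start:end]
    if candidate.startswith(("http", "www")):
        links.append(candidate)
    pos = html.find('href="', end)  # keep scanning after this value

for item in links:
    print(item)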

#5


2  

Here's a lazy version of @stephen's answer:

from urllib.request import urlopen
from itertools import chain
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    def reset(self):
        HTMLParser.reset(self)
        # href values seen so far, kept as a lazily extended iterator
        self.links = iter([])

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links = chain(self.links, [value])


def gen_links(f, parser):
    # Feed the response line by line and yield links as they appear.
    encoding = f.headers.get_content_charset() or 'UTF-8'

    for line in f:
        parser.feed(line.decode(encoding))
        yield from parser.links

Use it like so:

>>> parser = LinkParser()
>>> f = urlopen('http://*.com/questions/3075550')
>>> links = gen_links(f, parser)
>>> next(links)
'//*.com'
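
Because gen_links is a generator, nothing beyond the lines already fed to the parser is downloaded or parsed until you ask for more. To pull all the remaining links through at once you could, for example:

>>> remaining = list(links)   # drains the generator, reading the rest of the page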

#6


2  

Using BS4 for this specific task seems like overkill.

Try instead:

import re
import urllib2

website = urllib2.urlopen('http://10.123.123.5/foo_images/Repo/')
html = website.read()
files = re.findall('href="(.*tgz|.*tar.gz)"', html)
print sorted(files)
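
One caveat: .* is greedy, so on a line containing several href attributes it can match across them. A slightly tighter pattern (a sketch, not from the original source) keeps each match inside a single quoted value:

files = re.findall(r'href="([^"]*\.(?:tgz|tar\.gz))"', html)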

I found this nifty piece of code on http://www.pythonforbeginners.com/code/regular-expression-re-findall and it works quite well for me.

I tested it only on my scenario of extracting a list of files from a web folder that exposes the files/folders in it, e.g.:

[screenshot: directory listing of the web folder]

and I got a sorted list of the files/folders under the URL.
