
A Python crawler for downloading girl picture galleries

Before running the code, you need to install the third-party libraries it relies on: beautifulsoup4, requests, and lxml (the parser BeautifulSoup is asked to use). The os and time modules ship with Python.
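
If you use pip, something like this should pull everything in:

pip install beautifulsoup4 requests lxml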

from bs4 import BeautifulSoup
import requests
import time
import os

def get_html(url):
    # Fetch a page and return its HTML text, or None if the request fails.
    try:
        response = requests.get(url)
        response.encoding = 'gb2312'  # the site serves GBK/gb2312-encoded pages
        if response.status_code == 200:
            print('Successfully connected! URL is ' + url)
            return response.text
    except requests.RequestException:
        return None
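
A note on the hard-coded 'gb2312': the target site serves GBK-encoded pages, which is why the encoding is set by hand. As a sketch of an alternative (my assumption, not the original code), requests can also sniff the charset from the page body via apparent_encoding:

def get_html_sniffed(url):
    # Variant of get_html (a sketch, not the author's version): let requests
    # detect the charset instead of hard-coding gb2312.
    try:
        response = requests.get(url)
        response.encoding = response.apparent_encoding
        if response.status_code == 200:
            return response.text
    except requests.RequestException:
        return None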

def get_url_and_name(url):
    """The argument is a list-page link. The return value is a list with two
    elements: element 0 is the list of gallery links, element 1 is the list
    of gallery names."""
    html = get_html(url)
    soup = BeautifulSoup(html, 'lxml')
    name = []
    url_1 = []
    list2 = soup.find_all(class_='t')
    sign = 1
    for item in list2:
        # Skip the 1st and 42nd matches, which are not gallery entries on this page layout.
        if sign != 1 and sign != 42:
            url_temp = item.find('a').get('href')
            name_temp = item.find(class_='title').find('a').get('title')
            url_1.append(url_temp)
            name.append(name_temp)
        sign = sign + 1
    temp = [url_1, name]
    return temp

def get_pic_url(url):
    """The argument is a gallery link. The return value is the list of links
    to all images in the gallery."""
    address = []
    html1 = get_html(url)
    soup = BeautifulSoup(html1, 'lxml')
    # The pagination bar reveals how many pages the gallery spans.
    list4 = soup.find(class_='page').find_all('a')
    temp = 1
    while temp < len(list4):
        if temp == 1:
            url_3 = url
        else:
            # Page n of a gallery lives at '<original name>_<n>.html'.
            url_3 = url.replace('.html', '_' + str(temp) + '.html')
        temp = temp + 1
        html2 = get_html(url_3)
        soup1 = BeautifulSoup(html2, 'lxml')
        list3 = soup1.find(class_='content').find_all('img')
        for item in list3:
            address.append(item.get('src'))
    return address
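
To make that pagination scheme concrete: page 1 of a gallery keeps the original link, and page n swaps the '.html' suffix for '_n.html'. A tiny illustration with a made-up gallery URL:

# Illustration only; the sample URL below is hypothetical.
sample = 'https://www.keke234.com/gaoqing/2019/1234.html'
for page in range(2, 4):
    print(sample.replace('.html', '_' + str(page) + '.html'))
# prints .../1234_2.html and .../1234_3.html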
    
def pic_download(url, name, path):
    """url is the list of all image links for one gallery, name is the gallery
    name, and path is the download directory."""
    os.mkdir(os.path.join(path, name))
    # Because mkdir is used, the folder being created must not already exist,
    # otherwise an error is raised.
    print('The gallery being downloaded is ' + name)
    index = 1
    for i1 in url:
        filename = os.path.join(path, name, str(index) + '.jpg')
        with open(filename, 'wb') as f:
            img = requests.get(i1).content
            f.write(img)
        index += 1
        time.sleep(2)  # pause between downloads to go easy on the server
    print(name + ' download completed!')
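
As the comment above warns, os.mkdir raises FileExistsError when the gallery folder already exists, so rerunning the script crashes. A tolerant alternative (my suggestion, not the original author's code) is os.makedirs with exist_ok=True:

def ensure_dir(path, name):
    # Sketch: create the gallery folder only if needed, so reruns don't crash.
    target = os.path.join(path, name)
    os.makedirs(target, exist_ok=True)
    return target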

def main(i):
    # i is the page number of the gallery index (which index page to crawl).
    url = 'https://www.keke234.com/gaoqing/list_5_' + str(i) + '.html'
    # path is a user-defined download directory.
    path = r'H:\autoDownLoadPictures\savePicture'
    information = get_url_and_name(url)
    num = 0
    for item in information[0]:
        address = get_pic_url(item)
        pic_download(address, information[1][num], path)
        num = num + 1

if __name__ == '__main__':
    for i in range(1, 2):
        main(i)
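
As written, range(1, 2) yields only 1, so the script crawls just the first index page. Widening the range crawls more pages, for example:

for i in range(1, 4):  # index pages 1, 2 and 3
    main(i)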
