Scraping Tieba (Baidu's forum) data can be done with third-party Python libraries such as requests and BeautifulSoup. Here is a simple example:
1. Install the required libraries

```shell
pip install requests
pip install beautifulsoup4
```
2. Import the required libraries

```python
import requests
from bs4 import BeautifulSoup
```
3. Define the fetch function

```python
def get_tieba_data(url):
    # A browser-like User-Agent header helps avoid being served a stripped-down page
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/58.0.3029.110 Safari/537.3'
    }
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()  # fail fast on HTTP errors
    soup = BeautifulSoup(response.text, 'html.parser')
    return soup
```
4. Parse the page data

```python
def parse_tieba_data(soup):
    data = []
    # Each thread in the list is an <li class="j_thread_list clearfix"> element;
    # matching on a single class name is more robust than the full class string
    for item in soup.find_all('li', class_='j_thread_list'):
        link_tag = item.find('a', class_='j_th_tit')
        if link_tag is None:
            continue  # skip ads or other non-thread items
        data.append({'title': link_tag.get_text(strip=True),
                     'link': link_tag['href']})
    return data
```
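To see what the parsing step extracts without hitting the network, you can run the same selector logic against a handcrafted HTML snippet. Note that the snippet below only mimics the class names used above (`j_thread_list`, `j_th_tit`); real Tieba markup is far more complex and may change over time.

```python
from bs4 import BeautifulSoup

# A minimal mock of the thread-list markup (real pages are far more complex)
html = '''
<ul>
  <li class="j_thread_list clearfix">
    <a class="j_th_tit" href="/p/1234567">Sample thread title</a>
  </li>
</ul>
'''

soup = BeautifulSoup(html, 'html.parser')
data = []
for item in soup.find_all('li', class_='j_thread_list'):
    link_tag = item.find('a', class_='j_th_tit')
    data.append({'title': link_tag.get_text(strip=True), 'link': link_tag['href']})

print(data)  # [{'title': 'Sample thread title', 'link': '/p/1234567'}]
```

Passing a single class name to `class_` matches any element that has that class among its classes, which is why `j_thread_list` alone is enough here.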
5. Main function

```python
def main():
    url = 'https://tieba.baidu.com/f?kw=Python&ie=utf-8'
    soup = get_tieba_data(url)
    data = parse_tieba_data(soup)
    for item in data:
        print(item)

if __name__ == '__main__':
    main()
```
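The main function above only fetches the first page of results. Tieba paginates via a `pn` query parameter (by my reading, 50 threads per page, so `pn=0, 50, 100, ...`; verify against the live site before relying on this). A small stdlib-only helper, sketched here as a hypothetical `page_urls` function, can build the page URLs to feed into `get_tieba_data`:

```python
from urllib.parse import urlencode

BASE = 'https://tieba.baidu.com/f'

def page_urls(keyword, pages, page_size=50):
    """Build URLs for the first `pages` result pages of a forum keyword."""
    # page_size=50 is an assumption about Tieba's pagination step
    return [
        f"{BASE}?{urlencode({'kw': keyword, 'ie': 'utf-8', 'pn': i * page_size})}"
        for i in range(pages)
    ]

print(page_urls('Python', 2))
# ['https://tieba.baidu.com/f?kw=Python&ie=utf-8&pn=0',
#  'https://tieba.baidu.com/f?kw=Python&ie=utf-8&pn=50']
```

If you do fetch multiple pages, add a short delay between requests (e.g. `time.sleep`) to avoid putting load on the site.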
This example is for learning purposes only. In actual use, please comply with applicable laws and regulations and respect the site's terms of service and copyright.