Showing 33 open source projects for "web proxy"

  • 1
    XX-Net

    A web proxy tool

    XX-Net is an easy-to-use, anti-censorship web proxy tool from China. It includes GAE_proxy and X-Tunnel, with support for multiple platforms.
    Downloads: 63 This Week
  • 2
    mitmproxy

    A free and open source interactive HTTPS proxy

    mitmproxy is an open source, interactive, SSL/TLS-capable intercepting HTTP proxy with a console interface fit for HTTP/1, HTTP/2, and WebSockets. It is an ideal tool for penetration testers and software developers, covering debugging, testing, and privacy measurements. It can intercept, inspect, modify, and replay web traffic, and can even prettify and decode a variety of message types. Its web-based interface, mitmweb, offers an experience similar to Chrome's DevTools, with the addition of features like request interception and replay. ... A minimal addon sketch follows this entry.
    Downloads: 14 This Week
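    The sketch below is a minimal illustration of mitmproxy's documented addon API: it tags every proxied response with a marker header. The file name and header are arbitrary; run it with: mitmproxy -s add_header.py

        # Minimal mitmproxy addon: tag each intercepted response.
        from mitmproxy import http

        class AddHeader:
            def response(self, flow: http.HTTPFlow) -> None:
                # Runs for every HTTP response passing through the proxy.
                flow.response.headers["x-intercepted"] = "mitmproxy"

        addons = [AddHeader()]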
  • 3
    spider_collection

    Collection of Python web scraping scripts for data extraction tasks

    spider_collection is a collection of Python web crawler scripts created primarily for experimentation, learning, and practical scraping tasks. It gathers multiple independent spiders designed to collect data from different platforms and services, demonstrating a variety of scraping techniques and workflows. The crawlers use common Python scraping tools such as requests, parsel, BeautifulSoup, and the Scrapy framework to extract structured information from web pages. ... A representative sketch of this style of spider follows this entry.
    Downloads: 2 This Week
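    The sketch below is not from the collection itself; it is a minimal example of the requests-plus-BeautifulSoup technique its spiders demonstrate, with a placeholder URL.

        # Fetch a page and extract structured data: each link's text and href.
        import requests
        from bs4 import BeautifulSoup

        resp = requests.get("https://example.com", timeout=10)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        for a in soup.select("a[href]"):
            print(a.get_text(strip=True), a["href"])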
  • 4
    tumblr-crawler

    Python crawler to download photos and videos from Tumblr blogs

    tumblr-crawler is an open source, Python-based utility for downloading media content from Tumblr blogs. It provides a script that automatically retrieves photos and videos from specified Tumblr sites and saves them locally for offline access. Users can specify one or multiple blogs to crawl by editing a configuration file or by passing parameters on the command line. Once executed, the script fetches media from the Tumblr API and stores the downloaded files in folders named after each blog. An illustrative sketch of that workflow follows this entry.
    Downloads: 2 This Week
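    The sketch below illustrates the described workflow rather than tumblr-crawler's own code: given media URLs for a blog, it saves each file into a folder named after that blog. The function name, blog name, and URL are hypothetical.

        import os
        import requests

        def download_media(blog, media_urls):
            os.makedirs(blog, exist_ok=True)       # folder named after the blog
            for url in media_urls:
                name = url.rsplit("/", 1)[-1]      # keep the original file name
                resp = requests.get(url, timeout=30)
                resp.raise_for_status()
                with open(os.path.join(blog, name), "wb") as f:
                    f.write(resp.content)

        download_media("someblog", ["https://example.com/photo.jpg"])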
  • 5
    rnet

    Python HTTP client with TLS and HTTP/2 fingerprint emulation support

    rnet is an ergonomic and modular Python HTTP client designed for developers who need advanced control over network requests and protocol behavior. It provides a flexible API for making HTTP requests while supporting both asynchronous and blocking workflows, allowing it to integrate easily into different Python applications and runtimes. rnet focuses on low-level protocol customization, giving users fine-grained control over TLS and HTTP/2 configuration in order to emulate specific browser fingerprints. ...
    Downloads: 6 This Week
  • 6
    autocrawler

    Multiprocess Selenium crawler for downloading images by keywords

    AutoCrawler is a Python-based image-crawling tool that automatically downloads large numbers of images from search engines through automated browser interaction. It uses Selenium and a Chrome browser driver to navigate image-search pages and collect image sources for user-supplied keywords. AutoCrawler supports multiprocess and multithreaded downloading, which lets it retrieve images faster by running several tasks simultaneously. Users provide search terms through a keyword list. ... A minimal Selenium sketch of the underlying technique follows this entry.
    Downloads: 1 This Week
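    The sketch below shows the underlying Selenium technique (drive Chrome, collect image sources from a page) rather than AutoCrawler's own command-line interface; the URL is a placeholder.

        from selenium import webdriver
        from selenium.webdriver.common.by import By

        driver = webdriver.Chrome()                # needs Chrome + ChromeDriver
        try:
            driver.get("https://example.com")      # placeholder results page
            sources = [img.get_attribute("src")
                       for img in driver.find_elements(By.TAG_NAME, "img")]
            print(sources)
        finally:
            driver.quit()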
  • 7
    Grab Framework Project

    Web Scraping Framework

    Grab is a Python framework for building web scrapers. With Grab you can build scrapers of varying complexity, from simple 5-line scripts to complex asynchronous crawlers processing millions of web pages. Grab provides an API for performing network requests and for handling the received content, e.g., interacting with the DOM tree of the HTML document. The single request/response API lets you build a network request, perform it, and work with the received content. ... A short usage sketch follows this entry.
    Downloads: 0 This Week
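    A minimal sketch of Grab's request/response API based on its documentation; exact method names may differ between Grab versions.

        from grab import Grab

        g = Grab()
        resp = g.go("https://example.com")      # build and perform the request
        print(resp.select("//title").text())    # query the DOM tree via XPath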
  • 8
    Scweet

    Scrape tweets, profiles, followers and following from Twitter/X

    Scweet is a Python-based Twitter/X scraping library and CLI designed to collect tweets, profile timelines, followers, following lists, and user profile data without requiring the official Twitter/X API or a developer account. Instead of depending on deprecated unauthenticated scraping methods, it uses X’s web GraphQL API together with authenticated browser cookies, a more current and practical approach to data extraction. The project supports a broad set of ...
    Downloads: 3 This Week
  • 9
    Scrapling

    An adaptive Web Scraping framework

    Scrapling is an adaptive web scraping framework designed to handle everything from a single HTTP request to large-scale, concurrent crawls. Built for modern websites, it intelligently adapts to structural changes by automatically relocating elements when page layouts update. The framework includes advanced fetchers capable of bypassing anti-bot protections such as Cloudflare Turnstile using stealth and browser-automation techniques. Its powerful spider system supports multi-session crawling, ... A hedged fetch sketch follows this entry.
    Downloads: 2 This Week
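    A hedged single-request sketch: the Fetcher.get and .css names follow Scrapling's README at the time of writing and may differ in your installed version; the URL is a public scraping sandbox.

        from scrapling.fetchers import Fetcher

        page = Fetcher.get("https://quotes.toscrape.com/")
        # CSS selector with a ::text pseudo-element to pull the quote strings.
        print(page.css(".quote .text::text"))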
  • 10
    http-proxy-tunnel

    Create nested tunnels through HTTP proxies

    http-proxy-tunnel creates TCP tunnels through HTTP proxies that permit the CONNECT method. It differs from other proxy-tunnelling programs in that it can tunnel through multiple proxies and can use SSL tunnels. In combination with a web server that can proxy (such as Apache), this means you can serve normal web pages on ports 80 and 443 while also connecting to the server (using SSH, say) via those same ports. A sketch of the underlying CONNECT handshake follows this entry.
    Downloads: 4 This Week
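    The sketch below shows what a single CONNECT tunnel looks like on the wire; it is not http-proxy-tunnel's own code, and the proxy and target addresses are placeholders. Nesting tunnels amounts to repeating this handshake over the already-established socket.

        import socket

        proxy = ("proxy.example.com", 8080)    # placeholder proxy
        target = "ssh.example.com:22"          # placeholder target behind it

        sock = socket.create_connection(proxy)
        req = "CONNECT {0} HTTP/1.1\r\nHost: {0}\r\n\r\n".format(target)
        sock.sendall(req.encode("ascii"))
        status = sock.recv(4096).split(b"\r\n", 1)[0]
        if b" 200 " in status:
            # sock is now a raw TCP pipe to the target through the proxy.
            print("tunnel established")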
  • 11
    CacheGuard Gateway

    CacheGuard Gateway is a UTM, a WAF, and a QoS management appliance.

    ...Download CacheGuard-OS and install it on bare metal or a virtual machine. In minutes, you get a complete security gateway protecting your network at no cost. It includes a firewall, web antivirus, VPN, URL filtering, and an SSL-inspecting web proxy in one UTM stack. A built-in Web Application Firewall (WAF) works with the reverse proxy, load balancer, and SSL offloader to block malicious requests and low-reputation IP traffic. Quality of Service (QoS) prioritises critical traffic, balances multiple WAN links, and caches web content to optimise performance.
    Downloads: 204 This Week
  • 12
    ddgr

    DuckDuckGo from the terminal

    ddgr is a command-line utility to search DuckDuckGo from the terminal. While googler is highly popular among command-line users, many forums raised the need for a similar utility for the privacy-aware DuckDuckGo, and DuckDuckGo Bangs are super-cool too! So here's ddgr for you! Unlike the web interface, you can specify the number of search results you would like to see per page, which is more convenient than skimming through 30-odd results. The default interface is carefully designed to use... A scripted-usage sketch follows this entry.
    Downloads: 4 This Week
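    A hedged usage sketch driving ddgr non-interactively from Python; the -n and --json flags follow ddgr's documented options, but check ddgr --help for your version.

        import json
        import subprocess

        out = subprocess.run(
            ["ddgr", "--json", "-n", "5", "web proxy"],
            capture_output=True, text=True, check=True,
        )
        for hit in json.loads(out.stdout):
            print(hit["title"], hit["url"])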
  • 13
    pspider

    Simple Python framework for building multithreaded web crawlers

    PSpider is a lightweight web crawling framework written in Python, designed to simplify the development of custom web spiders. It focuses on providing an easy-to-understand architecture while still supporting concurrent crawling for improved performance. Its multithreaded model separates the crawling workflow into components responsible for fetching, parsing, and saving data, with tasks managed through queues so that different parts of the crawler can process work... A generic sketch of this architecture follows this entry.
    Downloads: 1 This Week
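    The sketch below illustrates the fetch/parse-and-save pipeline described above in generic Python (queues plus threads); it is not PSpider's actual API.

        import queue
        import threading
        import requests

        url_q = queue.Queue()     # URLs waiting to be fetched
        html_q = queue.Queue()    # fetched pages waiting to be parsed/saved

        def fetcher():
            while True:
                url = url_q.get()
                html_q.put((url, requests.get(url, timeout=10).text))
                url_q.task_done()

        def parser_saver():
            while True:
                url, html = html_q.get()
                print(url, len(html))          # stand-in for parse/save steps
                html_q.task_done()

        threading.Thread(target=fetcher, daemon=True).start()
        threading.Thread(target=parser_saver, daemon=True).start()
        url_q.put("https://example.com")
        url_q.join()
        html_q.join()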
  • 14
    Scylla

    Intelligent proxy pool for collecting and managing public proxies

    Scylla is an open source proxy pool system designed to collect, validate, and manage large numbers of public proxy servers for use in web scraping and data extraction workflows. It automatically crawls the internet to discover proxy IP addresses and evaluates their availability and reliability before adding them to a usable pool. It includes a JSON API that lets developers and applications retrieve proxy information programmatically, making it easy to integrate proxy rotation into scraping tools or automation scripts. ... A hedged API-query sketch follows this entry.
    Downloads: 10 This Week
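    A hedged sketch querying a locally running Scylla instance; the port (8899), the /api/v1/proxies path, and the response shape follow the project's documentation but should be verified against your installed version.

        import requests

        resp = requests.get("http://localhost:8899/api/v1/proxies", timeout=10)
        # Assumed response shape: {"proxies": [{"ip": ..., "port": ...}, ...]}
        for p in resp.json().get("proxies", []):
            print("{}:{}".format(p["ip"], p["port"]))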
  • 15
    googler

    Google Search, Google Site Search, Google News from the terminal

    googler is a power tool to search Google (Web & News) and Google Site Search from the command line. It shows the title, URL, and abstract for each result, which can be opened directly in a browser from the terminal. Results are fetched in pages (with page navigation), and sequential searches are supported within a single googler instance. googler was initially written to cater to headless servers without X; you can integrate it with a text-based browser. However, it has grown into a very handy and flexible...
    Downloads: 9 This Week
  • 16
    GoogleScraper

    Python tool for scraping search engine results from many providers

    GoogleScraper is a Python-based tool designed to automatically collect and process search engine results from multiple providers. It enables developers and researchers to programmatically query search engines and extract useful information such as links, titles, and result descriptions. GoogleScraper supports several major search engines and can be used to gather structured datasets from search result pages for further analysis. It provides two different scraping approaches: sending direct HTTP requests, or driving a real browser via Selenium. A hedged configuration sketch follows this entry.
    Downloads: 0 This Week
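    A hedged sketch of GoogleScraper's programmatic entry point; scrape_with_config is its documented API, but the exact config keys have changed between versions, so treat these as assumptions to check against your install.

        from GoogleScraper import scrape_with_config

        config = {
            "use_own_ip": True,
            "keyword": "web proxy",
            "search_engines": ["bing"],
            "num_pages_for_keyword": 1,
            "scrape_method": "http",     # the other approach is "selenium"
        }
        search = scrape_with_config(config)
        for serp in search.serps:        # one SERP object per results page
            for link in serp.links:
                print(link)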
  • 17
    ProxyBroker

    Asynchronous tool for finding and checking public proxy servers

    ProxyBroker is an open source Python tool designed to automatically discover and verify public proxy servers from many online sources. It operates asynchronously, allowing it to gather and test large numbers of proxies efficiently while performing multiple checks concurrently. It collects proxy addresses from dozens of providers and evaluates whether they are functional and suitable for use. It supports several proxy protocols, including HTTP, HTTPS, SOCKS4, and SOCKS5, making it flexible... A sketch adapted from its documented example follows this entry.
    Downloads: 1 This Week
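    The sketch below is adapted from ProxyBroker's documented find() example: discover up to ten working HTTP/HTTPS proxies and print each one as it is verified.

        import asyncio
        from proxybroker import Broker

        async def show(proxies):
            while True:
                proxy = await proxies.get()
                if proxy is None:        # Broker signals completion with None
                    break
                print(proxy)

        proxies = asyncio.Queue()
        broker = Broker(proxies)
        tasks = asyncio.gather(
            broker.find(types=["HTTP", "HTTPS"], limit=10),
            show(proxies),
        )
        asyncio.get_event_loop().run_until_complete(tasks)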
  • 18
    Jupyter Server Proxy

    Jupyter notebook server extension to proxy web services.

    Jupyter Server Proxy lets you run arbitrary external processes (such as RStudio, Shiny Server, Syncthing, PostgreSQL, Code Server, etc.) alongside your notebook server and provides authenticated web access to them at a path like /rstudio, next to others like /lab. Alongside the Python package that provides the main functionality, the JupyterLab extension (@jupyterhub/jupyter-server-proxy) adds buttons to the JupyterLab launcher window to open, for example, RStudio. A hedged configuration sketch follows this entry.
    Downloads: 0 This Week
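    A hedged configuration sketch for jupyter_server_config.py: the c.ServerProxy.servers schema and the {port} template follow the project's documentation, while the proxied command itself is a placeholder.

        # Expose a process at /myapp next to /lab; Jupyter substitutes a
        # free port for {port} when it launches the command.
        c.ServerProxy.servers = {
            "myapp": {
                "command": ["myapp", "--port", "{port}"],  # placeholder command
                "timeout": 30,                             # startup wait, seconds
            }
        }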
  • 19
    haipproxy

    Distributed proxy IP pool for web crawlers using Scrapy and Redis

    HAipproxy is a distributed proxy IP pool system designed to collect, manage, and provide large numbers of proxy addresses for web crawling tasks. It automatically crawls proxy resources from the internet and aggregates them into a centralized pool that distributed spiders and scraping systems can access. It is built in Python, using Scrapy for high-performance crawling and Redis for data storage, communication, and task coordination between components. ...
    Downloads: 0 This Week
  • 20
    gain

    Asyncio-based Python framework for building fast web crawling spiders

    Gain is a Python web crawling framework designed to simplify the process of building efficient and scalable web scrapers. It is built on top of asynchronous technologies such as asyncio, aiohttp, and uvloop to support high-performance crawling with concurrent network requests, and it provides a structured framework for creating spiders that can navigate websites, extract structured data, and process the collected results. A hedged spider sketch follows this entry.
    Downloads: 1 This Week
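    A hedged sketch modeled on gain's README example; the Css/Item/Parser/Spider names and the start_url/parsers attributes follow the project's docs, while the site and URL patterns are placeholders.

        from gain import Css, Item, Parser, Spider

        class Post(Item):
            title = Css(".entry-title")     # CSS selector for the field

            async def save(self):
                print(self.title)           # handle each extracted item

        class MySpider(Spider):
            concurrency = 5
            start_url = "https://blog.example.com/"           # placeholder
            parsers = [Parser(r"https://blog\.example\.com/page/\d+/"),
                       Parser(r"https://blog\.example\.com/\d{4}/.+", Post)]

        MySpider.run()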
  • 21
    HTTP Replicator is a general-purpose caching proxy server written in Python. It reduces bandwidth by merging concurrent downloads and building a local 'replicated' file hierarchy, similar to wget -r. The cache is also accessible through a web interface.
    Downloads: 0 This Week
  • 22
    Atomschlag

    A lightweight Webkit browser written entirely in Python

    Atomschlag is a project to write a WebKit-based browser using PyGTK and PyWebKitGTK, entirely in Python, as a usable, secure, and lightweight replacement for existing browsers in custom appliances. The primary project goals are: small size; minimal ability for client info to be used to track you; maximal compatibility with proxy-based anonymity layers such as I2P; URL filtering to block ads and user-tracking services; and a simple, uncluttered user interface.
    Downloads: 0 This Week
  • 23
    LinkChecker

    Check links in web documents or full websites

    New homepage: http://wummel.github.io/linkchecker/. LinkChecker features: recursive and multithreaded checking and site crawling; output in colored or normal text, HTML, SQL, CSV, XML, or a sitemap graph in different formats; support for HTTP/1.1, HTTPS, FTP, mailto:, news:, nntp:, Telnet, and local file links; restriction of link checking with regular-expression filters for URLs; proxy support; ...
    Downloads: 2 This Week
  • 24
    Spondulas

    Browser emulator and parser designed to retrieve web pages for malware hunting

    Spondulas is a browser emulator and parser designed to retrieve web pages for hunting malware. It supports generation of browser user agents, GET/POST requests, and SOCKS5 proxies. It can be used to parse HTML files sent via e-mail. Monitor mode allows a website to be watched at intervals to discover changes in DNS or content over time. Autolog mode creates an investigation file that documents redirection chains.
    Downloads: 0 This Week
  • 25
    HTTP proxy via e-mail. Mailwebproxy reads a URL from the mail subject and PUT data from the mail body, fetches the page from the web server, and returns a mail containing the web pages, so you can read web pages through a mail interface. This is useful for cellular phone users.
    Downloads: 0 This Week