About Scrapinghub: We are the creators and lead maintainers of Scrapy, an open source web extraction framework used by over 1M developers. As we’ve grown through the years, our amazing team has worked to bring new products to life, such as:
Crawlera: A specially designed proxy for web scraping to ensure you can crawl quickly and reliably.
Splash: A headless browser to enable customers to extract data from JavaScript websites.
AutoExtract: delivers next-generation web scraping capabilities backed by an AI-enabled data extraction engine.
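Splash exposes its rendering through a plain HTTP API: requesting its `render.html` endpoint returns a page's HTML after JavaScript has run. A minimal sketch of building such a request with only the standard library (the `localhost:8050` address assumes a default local Splash instance; `wait` gives the page's scripts time to finish before the snapshot is taken):

```python
from urllib.parse import urlencode

# Base endpoint of a locally running Splash instance (default port 8050).
SPLASH = "http://localhost:8050/render.html"

def splash_url(target, wait=0.5):
    """Build a Splash render.html request URL for a JavaScript-heavy page."""
    return SPLASH + "?" + urlencode({"url": target, "wait": wait})

# The resulting URL can be fetched with any HTTP client, e.g.
# urllib.request.urlopen(splash_url("https://example.com")).read()
print(splash_url("https://example.com"))
# → http://localhost:8050/render.html?url=https%3A%2F%2Fexample.com&wait=0.5
```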
We are a remote-first company with a team distributed across 30 countries. We have a very engineering-driven culture (two engineer-founders) and are a great place to work if you're self-directed, curious, and interested in working in open source environments. More on Open Source at Scrapinghub: http://scrapinghub.com/opensource/.
We develop a wide range of products, including:
AutoExtract - API for automated e-commerce and article extraction from web pages using Machine Learning.
Crawlera - smart crawling proxy
Scrapy Cloud - a cloud platform for running spiders
Data on Demand - turn-key web scraping services and more!
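Each of these products ultimately revolves around the same core task: fetching pages and pulling structured fields out of their HTML. That extraction step can be sketched with only the standard library (a real Scrapy spider would use Scrapy's CSS/XPath selector API rather than a hand-rolled parser):

```python
from html.parser import HTMLParser

class TitleCollector(HTMLParser):
    """Collect the text of every <h2> element: a stand-in for the
    field-extraction step of a web scraper."""
    def __init__(self):
        super().__init__()
        self._in_h2 = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self._in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_h2 = False

    def handle_data(self, data):
        # Only keep text that appears inside an <h2> element.
        if self._in_h2 and data.strip():
            self.titles.append(data.strip())

page = "<html><body><h2>First post</h2><p>...</p><h2>Second post</h2></body></html>"
parser = TitleCollector()
parser.feed(page)
print(parser.titles)  # → ['First post', 'Second post']
```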
Come join our fully remote team of over 180 people in 30 countries.
You'll have the chance to work on projects that build and transfer datasets of billions of records, as well as build the systems that deliver data to Fortune 500 companies and startups building great products on top of our stack.
Scrapinghub has benefited from Open Source throughout our history. As a way to give back to the community, everybody on our team has the chance to contribute to Open Source projects. Find out more on Open Source at Scrapinghub: http://scrapinghub.com/opensource/.
- Senior Software Engineer (Big Data/AI): You will design and implement distributed systems: a large-scale web crawling platform, integrating Deep Learning-based web data extraction components, working on queue algorithms and large datasets.
- DevOps Engineer: Work closely with our Crawlera developers to make their lives easier by creating automations, and handle everything around running, deploying and upgrading the application.
- Python Developer: Join our Delivery team to work on web crawler development with Scrapy, our flagship open source project.
Scrapinghub | https://scrapinghub.com | 100% Remote | Full-time | Multiple roles
Scrapinghub turns web content into useful data.
We develop a wide range of products, including:
Crawlera - smart crawling proxy
Scrapy Cloud - a cloud platform for running spiders
Data on Demand - turn-key web scraping services and more!
We are hiring skilled Engineers for various positions, including Spider Development, Web Scraping Research and Solution Engineer roles. Come join our fully remote team of over 180 people in 30 countries.
You'll have the chance to work on projects that build and transfer datasets of billions of records, as well as build the systems that deliver data to Fortune 500 companies and startups building great products on top of our stack.
Scrapinghub has benefited from Open Source throughout our history. As a way to give back to the community, everybody on our team has the chance to contribute to Open Source projects. Find out more on Open Source at Scrapinghub: http://scrapinghub.com/opensource/.
- Enterprise Solutions Engineer: You will join the Crawlera team to assist Enterprise customers in achieving their business goals via Crawlera, and support the Sales team in achieving their quotas.
- Principal Reverse Engineer: You’ll be given the time and resources to quickly hack together proofs of concept, test them, and produce a knowledge base for other developers at Scrapinghub.
Erlang Developer - You will learn to investigate production issues on servers executing customer requests, and to navigate a large code-base to find the least obtrusive place for extensions.
Open Source Maintainer - You will help us develop and maintain our Open Source software, to ensure Scrapy and other Scrapinghub Open Source projects thrive.
Scrapinghub | https://scrapinghub.com | 100% Remote | Full-time | Multiple roles
Scrapinghub turns web content into useful data.
We develop a wide range of products, including:
Crawlera - smart crawling proxy
Scrapy Cloud - a cloud platform for running spiders
Data on Demand - turn-key web scraping services and more!
We are hiring skilled Engineers for various positions, including Spider Development, Web Scraping Proof of Concept and customer-facing roles. Come join our fully remote team of over 160 people in 30 countries.
You'll have the chance to work on projects that build and transfer datasets of billions of records, as well as build the systems that deliver data to Fortune 500 companies and startups building great products on top of our stack.
Scrapinghub has benefited from Open Source throughout our history. As a way to give back to the community, everybody on our team has the chance to contribute to Open Source projects. Find out more on Open Source at Scrapinghub: http://scrapinghub.com/opensource/.
- Python Developer (scraping): you will be in charge of designing, developing and testing Scrapy web crawlers.
- Enterprise Solutions Engineer: You will join the Crawlera team to assist Enterprise customers in achieving their business goals via Crawlera, and support the Sales team in achieving their quotas.
- Web Scraping POC: You’ll be given the time and resources to quickly hack together proofs of concept, test them, and produce a knowledge base for other developers at Scrapinghub.
Interview process: 2 interviews and a technical trial project.
We develop a wide range of products, including:
Crawlera - smart crawling proxy
Scrapy Cloud - a cloud platform for running spiders
Data on Demand - turn-key web scraping services and more!
We are hiring Python Developers, Support Engineers, an Erlang Developer (Tech Lead) and more to join our fully remote team of over 140 people in 30 countries.
You'll have the chance to work on projects that build and transfer datasets of billions of records, as well as build the systems that deliver data to Fortune 500 companies and startups building great products on top of our stack.
Scrapinghub has benefited from Open Source throughout our history. As a way to give back to the community, everybody on our team has the chance to contribute to Open Source projects. Find out more on Open Source at Scrapinghub: http://scrapinghub.com/opensource/.
- Python Developer (scraping): you will be in charge of designing, developing and testing Scrapy web crawlers.
- Support Engineer: Provide world class support for our Scrapinghub customers by investigating and resolving issues.
- Lead Erlang Developer: Join and lead our Crawlera team. Crawlera is a smart downloader designed specifically for web crawling and scraping. It allows crawler developers to crawl quickly and reliably by managing thousands of proxies internally.
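From a crawler's point of view, Crawlera behaves like an ordinary HTTP proxy: the client simply routes its requests through the proxy endpoint, and the proxy decides which outgoing IP each request uses. A minimal standard-library sketch; the proxy host, port, and `<API_KEY>` placeholder below are illustrative, not real connection details:

```python
import urllib.request

# Placeholder proxy endpoint and API key -- substitute real account values.
PROXY = "http://<API_KEY>:@proxy.example.com:8010"

# Route all HTTP(S) traffic through the proxy endpoint.
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
)

# opener.open("https://example.com") would now send the request via the proxy.
```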
Scrapinghub continues to grow significantly this year and we're looking for great additions to our team. Positions are full-time (40 hours per week) and fully remote.
Interview process: 2 interviews and a technical trial project.
Quick summary of some of the open positions (Check out our website for a full list):
- Python Developer (scraping): you will be in charge of designing, developing and testing Scrapy web crawlers.
- Support Engineer: Provide world class support for our Scrapinghub customers by investigating and resolving issues.
- Lead Erlang Developer: Join and lead our Crawlera team. Crawlera is a smart downloader designed specifically for web crawling and scraping. It allows crawler developers to crawl quickly and reliably by managing thousands of proxies internally.
About Scrapinghub: We're a fully distributed team with more than 140 Shubbers working from over 30 countries, who are passionate about scraping, web crawling and data science.
You'll have the chance to work on projects that harvest and transfer datasets of billions of records, as well as build some of the systems that deliver data to Fortune 500 companies and the startups that are building great products on top of our stack.
We have a very engineering-driven culture (two engineer-founders) and are a great place to work if you're self-directed, curious, and interested in working in open source environments. More on Open Source at Scrapinghub: http://scrapinghub.com/opensource/.
Scrapinghub continues to grow significantly this year and we're looking for great additions to our team. Positions are full-time (40 hours per week) and fully remote.
Interview process: 2 interviews and a technical trial project.
- Python Developer (scraping): you will be in charge of designing, developing and testing Scrapy web crawlers.
- Backend Engineer: You will develop and grow our Crawling and Extraction services.
- Data Scientist: You will apply your data science and engineering skills to create products based on machine learning, analyze large volumes of complex data, model challenging problems, and develop algorithms to solve our internal and client needs.
About Scrapinghub:
We're a fully distributed team with more than 130 Shubbers working from over 30 countries, who are passionate about scraping, web crawling and data science.
You'll have the chance to work on projects that harvest and transfer datasets of billions of records, as well as build some of the systems that deliver data to Fortune 500 companies and the startups that are building great products on top of our stack.
We have a very engineering-driven culture (two engineer-founders) and are a great place to work if you're self-directed, curious, and interested in working in open source environments. More on Open Source at Scrapinghub: http://scrapinghub.com/opensource/.
Scrapinghub continues to grow significantly this year and we're looking for great additions to our team. Positions are full-time (40 hours per week) and fully remote.
Interview process: 2 interviews and a technical trial project.
Scrapinghub is looking for Python Engineers, Erlang Developers, Test Engineers and more: https://scrapinghub.com/jobs
Quick summary of the open positions:
- Python Engineer (scraping): you’ll be in charge of designing, developing and testing Scrapy web crawlers.
- Test Automation Engineer: you will build automated test frameworks and ad hoc test scripts to assist in the verification and validation of data quality.
- Erlang Engineer/Tech Lead: you will lead our Crawlera team in developing and maintaining a high load distributed system.
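Automated verification of data quality, as in the Test Automation Engineer role above, largely comes down to mechanical checks on each scraped record: required fields present, values of a sensible shape. A small sketch under assumed, illustrative field names (`url`, `title`, `price` are examples, not a real Scrapinghub schema):

```python
def validate_record(record):
    """Return a list of data-quality problems found in one scraped record.

    An empty list means the record passed. Field names are illustrative.
    """
    problems = []
    # Required fields must be present and non-empty.
    for field in ("url", "title", "price"):
        if field not in record or record[field] in (None, ""):
            problems.append(f"missing field: {field}")
    # Numeric sanity check: prices should never be negative.
    price = record.get("price")
    if isinstance(price, (int, float)) and price < 0:
        problems.append("negative price")
    # URLs should be absolute http(s) links.
    url = record.get("url", "")
    if isinstance(url, str) and url and not url.startswith(("http://", "https://")):
        problems.append("url is not absolute")
    return problems

print(validate_record({"url": "https://example.com/p/1", "title": "Widget", "price": 9.99}))  # → []
print(validate_record({"url": "ftp://x", "title": "", "price": -1}))
```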
We're a fully distributed team with more than 120 Shubbers working from 30 countries, who are passionate about scraping, web crawling and data science.
You'll have the chance to work on projects that harvest and transfer datasets of billions of records, as well as build some of the systems that deliver data to Fortune 500 companies and the startups that are building great products on top of our stack.
We have a very engineering-driven culture (two engineer-founders) and are a great place to work if you're self-directed, curious, and interested in working in open source environments. More on Open Source at Scrapinghub: http://scrapinghub.com/opensource/.
Scrapinghub continues to grow significantly this year and we're looking for great additions to our team, wherever you're located! Positions are full-time (40 hours per week) and fully remote.
Interview process: 2 interviews and a technical trial project.
We are a globally distributed team of over 190 Zytans working from over 28 countries. We are on a mission to enable our customers to extract the data they need to continue to innovate and grow their businesses.
We are the Creators and lead maintainer of Scrapy, an open source web extraction framework used by over 1M developers.
Open roles:
Senior Backend Engineer (Python, Scala, Java) - design and implement distributed systems: a large-scale web crawling platform, integrating Deep Learning-based web data extraction components, working on queue algorithms and large datasets, and creating a development platform for other company departments.
Senior Backend Engineer (Python) - work on our customer-facing application, tools and APIs to make it easier for data analysts and machine learning engineers to focus on finding insights while we handle their web data needs.
Senior Frontend Engineer (Angular) - design and develop our customer-facing application.
Internal Systems Python Developer - develop and maintain our systems, which mainly comprise cloud-based applications with custom integrations between them.
Browser Engineer - Work as a member of the content fetching team to implement features in the custom browser, support downstream teams and developers by creating tools for better debugging and introspection.
Principal Reverse Engineer - use reverse engineering and static, dynamic or concolic analysis in conjunction with Zyte's best-in-class tools, including Zyte's Smart Proxy Manager.
Developer Advocate - create articles, videos, tutorials and other types of content for Zyte as well as third-party sites and blogs. You’ll build long-lasting relationships with members of the community.
Please reach out to Jessica at jobs@zyte.com or apply via our website https://www.zyte.com/jobs/