At ETH we worked on a project for reading analog pressure gauges; the larger goal was to automatically read analog pressure gauges in oil refineries. https://github.com/ethz-asl/analog_gauge_reader
A year or two ago, I interviewed a developer who worked for one of the companies trying to build these types of optical blood pressure sensors. I have high blood pressure so I was keen to learn more. The gist of his message was that even in excellent conditions, it was very inaccurate.
The one thing that is painfully obvious to anyone who has rented an electric vehicle in a foreign country is that charging those things is close to impossible.
Most charging networks require a specific app that often can only be downloaded in that country's App Store. If you do manage to download the app for a network, signing up can fail because it requires a local address or phone number. Eventually, you'll find a network that does allow foreigners to sign up, only to realize that their closest charger is slower and 10 miles away. Often the chargers will show up as available in the app but not actually be functional, and there is no-one at the site to help, even if it's a gas station. They'll just roll their eyes and say it's not their charger. Call the network.
At least gas you can buy with cold hard cash or by simply swiping your credit card at a gas station. No need to input your pronouns into an app.
A good example of how society sometimes regresses on some fronts.
These methods learn the radiance directly, with the capture-time lighting baked in. The scene therefore always renders under the same illumination, and changed lighting or shadows are not taken into account. Current methods aren't able to do this, but maybe some day we'll discover methods that can infer and account for some lighting parameters.
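A rough sketch of why the lighting gets baked in, using the volume-rendering integral that NeRF-style methods optimize (symbols follow the original NeRF formulation):

```latex
C(\mathbf{r}) \;=\; \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,
  \mathbf{c}\big(\mathbf{r}(t), \mathbf{d}\big)\,dt,
\qquad
T(t) \;=\; \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right)
```

The learned radiance c takes only a 3D position and a viewing direction d as inputs; there is no illumination variable anywhere in the model, so whatever lighting (and shadowing) existed at capture time is frozen into c.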
Hey everyone! Here is an app I've been working on over the past couple years for my own purposes. The original idea came from me wanting to collect RGB-D scans for research and development purposes. One option was to buy a depth camera and use that, but I figured I might as well just use the time-of-flight sensor on my phone.
Over the years, tons of users from all over the world have reached out to me asking for the source code, requesting features, or wanting to adapt the app in some way. The users are usually computer vision researchers at universities or corporate research labs, or engineers working on commercial projects.
Sometimes the requested changes don't make sense for other users. Sometimes they do, but I don't have the time to develop them.
I've now decided to just release the source code so that people can hack it to do whatever they want. Hopefully people will find it useful, and some might contribute changes back so the app becomes more useful for others as well.
In terms of advice, I would for sure start by adding GPS logging to the app. The app logs the ARKit odometry information, which could be a good starting point, but I don't know how well that would perform with all that vibration.
It would for sure be a fun experiment, but I think given the extreme conditions, it might be easier to build a custom SLAM sensor rig with a high-quality IMU and several synchronized cameras.
That sounds awesome. I do some mountain biking myself and have sometimes thought that it would be cool if you could replay your rides in 3D through a sparse SLAM point cloud. Of course it would be very hard to run SLAM onboard an MTB, as there is so much vibration and so many changes in lighting.
If you could use accelerometer readings, say from the phone running the app, to mitigate vibration offsets in the measurements, that would be cool. This is all beyond my capabilities, but I'd pay some HNer if they can build a LIDAR scanner I can mount to my bike...
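As a minimal sketch of the idea (not tied to any particular app or sensor API), one simple way to suppress high-frequency vibration in accelerometer readings is a single-pole low-pass filter (exponential moving average); all names and values here are illustrative:

```python
def low_pass(samples, alpha=0.05):
    """Single-pole low-pass filter over a stream of accelerometer
    samples. Attenuates high-frequency vibration while tracking the
    slower rider motion. alpha in (0, 1]: smaller = heavier smoothing."""
    filtered = []
    state = samples[0]  # initialize with the first reading
    for s in samples:
        state = alpha * s + (1 - alpha) * state
        filtered.append(state)
    return filtered

# Toy example: a steady 9.8 m/s^2 gravity signal corrupted by
# alternating +-0.5 m/s^2 vibration noise.
noisy = [9.8 + (0.5 if i % 2 else -0.5) for i in range(200)]
smooth = low_pass(noisy, alpha=0.05)
```

A real rig would of course pick the cutoff based on the vibration spectrum (a proper IIR/FIR design rather than a bare EMA), but the principle is the same: the filtered signal hugs 9.8 while the raw one swings by a full m/s².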
Even if it were possible from a legislative perspective, European cities are also considerably harder to drive in than American cities, which are car-first with wider lanes and clear grid layouts. Most parts of Europe also have more variable weather conditions.
But yeah, it is indeed also sad that Europe does not have a single credible horse in the race.
Weather isn't really an issue. People would just test in the wide, sunny streets of Madrid (or some other overgrown, southern city) if they wanted to avoid those sorts of issues. Europe simply isn't seen as viable for development right now.
The regulatory environment is certainly noted and prepped for though.
We build physically intelligent software that enables industrial robots to perform complex manipulation tasks in the food industry - things that were previously impossible to automate. Our systems combine advanced ML, computer vision, and control algorithms to make robots truly adaptable to real-world variations.
We're looking for a Senior Robotics Engineer to help architect and implement our core technology stack. You'll work directly with industrial robots, design perception and control systems, and help shape our technical direction as an early employee.
Tech stack: Python, C++, ROS, PyTorch, industrial robotics platforms
- Good robotics fundamentals (perception, planning, control)
- 1-2+ years hands-on experience with industrial robots or similar systems
- ML/computer vision experience is valuable
- Must be based in or willing to relocate to Zurich
Why us:
- Work on cutting-edge robotics problems with immediate real-world impact
- Significant equity package
- Shape technical architecture and company direction as an early employee
- Central Zurich office location
Apply: join@witty-machines.com