Hacker News | ep_jhu's comments

Nice! I like that it moves things to the recycling bin rather than straight up deleting them. Also, the filtering is great. I wish Apple included that in the OSX (Storage > Messages) or iOS (Storage > Review Large Attachments) UIs. It seems like a must-have.


Everyone here is thinking about privacy and surveillance and here I am wondering if this is what lets us speed up nano cameras to relativistic speeds with lasers to image other solar systems up close.


Thank you!

It's been a while since I've heard anyone talk about the Starshot project[0]. Maybe this would help revitalize it.

Also even without aiming for Proxima Centauri, it would be great to have more cameras in our own planetary system.

--

[0] - https://en.wikipedia.org/wiki/Breakthrough_Starshot



We would also need a transmitter of equivalent size to send those images back, and an energy source.


Honestly, even if they are the size of a jellybean, it would be a massive boon for space exploration. Just imagine sending them for reconnaissance work around the solar system to check out potential bodies for bigger probes to explore later down the track. Even to catch interesting objects suddenly appearing, like ʻOumuamua, with minimal delay.


Just do a round trip!


We'll need even bigger[1] breakthroughs in propulsion if it's going to propel itself back to Sol at relativistic speeds.

1. A "simpler" sci-fi solution foe a 1-way trip that's still out of our reach is a large light sail and huge Earth-based laser, but his required "smaller" breakthroughs in material science


Well if you can propel something forward you can propel it backwards as well.

I'm assuming some sort of fixed laser type propulsion mechanism would leverage a type of solar sail technology. Maybe you could send a phased laser signal that "vibrates" a solar sail towards the source of energy instead of away.


> Well if you can propel something forward you can propel it backwards as well

Not necessarily - at least not with currently known science. Light sails work by transferring momentum from photons, allowing positive acceleration away from a giant laser on Earth. A return trip requires a giant laser on the other side.


As well as a way around Newton's Third Law.


I meant to say the "simpler" (but still very complicated) solar sail approach was for a one-way trip. On paper, our civilization can muster the energy required to accelerate tiny masses to relativistic speeds. A return trip at those speeds would require a new type of science to concentrate that amount of energy in a small mass and use it for controlled propulsion.


My COVID project was DCSkyCam[1], which is on a Pi4 with the HQ camera to do sunrises/sunsets, and uses TFLite and other packages for object detection and identification of helicopters that fly by. I also have a Pi 3B running pihole[2] and running custom scrape jobs to alert me of certain website updates, which I’d like to transition to using selenium but the 3B is too slow for chromedriver - might upgrade to a Pi 5 if I can.

[1] https://dcskycam.net [2] https://pi-hole.net/
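For what it's worth, the change-detection half of a scrape job like that can stay lightweight even if the fetching eventually moves to selenium. A minimal stdlib sketch (the fetch itself is omitted, and the function names are my own):

```python
import hashlib

def content_fingerprint(html: str) -> str:
    """Collapse whitespace so trivial reflows don't trigger false
    alerts, then hash the normalized text."""
    normalized = " ".join(html.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def has_changed(html: str, last_fingerprint: str) -> bool:
    """Compare the freshly fetched page against the stored fingerprint."""
    return content_fingerprint(html) != last_fingerprint
```

A cron job on the Pi can store the last fingerprint to disk and only fire an alert when `has_changed` returns True.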


This is way cool.


One issue I always run into when implementing these approaches is the embedding model's context window being too small to represent what I need.

For example, on this project, looking at the generation of training data [1], it seems like what's actually being generated are embeddings of a string concatenated from each review, title, description, etc. [2]. With max_seq_length set to 200, wouldn't lengthy book reviews result in the book description text never being encoded? And wouldn't that result in queries not matching potentially similar descriptions when the reviews are topically dissimilar (e.g., discussing the author's style, the book's flow, etc. instead of the plot)?

[1] https://github.com/veekaybee/viberary/blob/main/src/model/ge... [2] https://github.com/veekaybee/viberary/blob/main/src/model/ge...
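To make the truncation concern concrete, here's a toy sketch (whitespace splitting stands in for the model's subword tokenizer, the cap mirrors the max_seq_length setting, and `encode_input` is my own name for the concatenate-then-truncate step):

```python
MAX_SEQ_LENGTH = 200  # mirrors the setting referenced above

def encode_input(review: str, title: str, description: str) -> list[str]:
    """Concatenate the fields into one string, then truncate to the
    context window (whitespace tokens stand in for subword tokens)."""
    combined = " ".join([review, title, description])
    return combined.split()[:MAX_SEQ_LENGTH]

# A 500-word review pushes the title and description past the cap,
# so they never reach the encoder:
tokens = encode_input("word " * 500, "Book Title", "a plot description")
```

With real subword tokenization the cutoff lands even earlier, since 200 subword tokens is usually fewer than 200 words.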


I have the same problem with a project I'm working on. In my case I'm chunking the documents and encoding the chunks, then doing semantic search over the embeddings of the chunked documents. It has some drawbacks, but it's the best approach I could think of.
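The chunk-then-search approach looks roughly like this (a toy sketch: overlapping word windows, with a bag-of-words vector and cosine similarity standing in for a real embedding model):

```python
import math
from collections import Counter

def chunk(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into overlapping word windows so each chunk fits
    the embedding model's context window."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real system would call the model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: list[str]) -> str:
    """Return the single best-matching chunk across all documents."""
    chunks = [c for d in docs for c in chunk(d)]
    q = embed(query)
    return max(chunks, key=lambda c: cosine(q, embed(c)))
```

The drawback mentioned above shows up here: the best chunk may match the query while the document as a whole does not, so some systems aggregate chunk scores per document instead of ranking raw chunks.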


In 1966, NASA launched two rockets to LEO as part of the Gemini 12 mission, under two hours apart (the Gemini capsule and its Agena docking target).

As for SpaceX, just a few months ago the Psyche asteroid mission and a Starlink launch also happened the same day.


There have already been hacks, with 100k users' account data leaked: https://www.bleepingcomputer.com/news/security/over-100-000-...

OpenAI says it was an issue with the users' devices, not their service, though.


For my DCSkyCam project, htop shows 1.8 load average on my Pi 4B. It's always busy, with resources left to handle spikes based on what the camera detects (or post to mastodon/website). I'd say it's right-sized. Best part - not throttled, even without a case, fans, or even heatsinks. That probably wouldn't be the case with a Pi 5.


The account I use for my @dcskycam Twitter bot has been locked a few times for posting sunset videos flagged as inappropriate and the account itself was “permanently suspended for repeated violations” only to be un-suspended almost immediately.

This is just a hobby project for me but if it was revenue generating I’d be looking for other platforms rather than starting new projects on such an unstable platform!


This is great! I've done something similar, but to detect helicopters and tweet them to a local copter spotting group: https://twitter.com/dcskycam/with_replies

I also use the Pi+HQ Cam, but with their wide-angle lens, and I do all the object detection (and heli type classification for a handful of types) on-device using TFLite.

How do you not run up a Rekognition bill?! One of my project goals was to have the total cost be as low as possible, so while I do use AWS for the website hosting (S3+R53+CF) it's $0.60/mo - everything else is on-device. With Rekognition, processing a file every 2 seconds, it would add up!


Thanks! (this is my project).

Your copter project looks great! I’ll have to check out TFLite!

I’m only running twice a minute and only during daylight hours. These large boats move pretty slowly, so it’s not terrible. At $1 per thousand calls, it’s a little less than a dollar a day.
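Back-of-envelope on that, assuming roughly 8 hours of daylight (the exact hours vary by season):

```python
calls_per_minute = 2        # one Rekognition call every 30 seconds
daylight_hours = 8          # rough assumption; varies by season
price_per_call = 1 / 1000   # $1 per thousand calls, as above

calls_per_day = calls_per_minute * 60 * daylight_hours
daily_cost = calls_per_day * price_per_call
print(f"{calls_per_day} calls/day -> ${daily_cost:.2f}/day")
# -> 960 calls/day -> $0.96/day
```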

I would be interested in trying out on device classification at some point both for reducing the cost and more frequent image capture (the boats aren’t that fast but it would be nice to get the “perfect” framing).

Thanks for your comment and thanks again for sharing your project — very cool!


Really cool project. I've lived in DC for a bit and came across Helicopters of DC[1], who have gamified spotting copters across the city.

The model they use is also publicly available to call via API for free: https://universe.roboflow.com/helicoptersofdc/helicopters-of...

Have you thought about collaborating with them on data/models?

[1] https://twitter.com/HelicoptersofDC


Not to hijack the OP, but yes, my Twitter bot actually tweets spots to his bot. My DCSkyCam data helps train his models, but another objective for me was to learn image ML techniques, so I’m using all my own data/models.


I would love to read or watch any content you have created about this project. It’s literally a goal of mine to build something similar. I love helicopters and live under a flight path that’s used by the rescue choppers, and I run outside to watch them every time. I’m going to start collecting ADS-B data also; I can imagine that’s complementary to your efforts.


I'm running a Twitter bot that tweets out sunrise/sunset timelapses, and also does on-device object detection and classification to tweet out helicopters that fly by the field of view.

https://twitter.com/dcskycam/with_replies

It's a Pi 4B with the HQ Cam (6mm lens). The ML models are trained on my MBP, converted to tflite, and run on the pi itself.

The main use case is the timelapses, but after seeing that most of the helicopters that flew by weren't transmitting ADS-B, I figured I could help out the @helicoptersofdc crowdsourcing project (run by someone else) by contributing heli spots in an automated manner that might otherwise go unreported.

I also have a 3B+ running pihole on the LAN.

