Hacker News | sseppola's comments

So many people have much of their capital in housing which makes it extremely unpopular to try to lower the prices, so neither side is going to make this their top priority.


This makes me think of ‘Why a meritocracy is corrosive to society’ by Philosophize This! It was very interesting to hear the downsides expressed, and how ingrained meritocracy is in society, to the point that we don’t even notice it. Perhaps the most important point: how thankful we should be to have skills that are highly valued in this environment, which should give you some empathy for the people who don’t. https://open.spotify.com/episode/7ASBhftzNrJnFL0NV3Iqtu?si=c...


for anyone who doesn't like Spotify or prefers YT

https://youtube.com/watch?v=TKQTbVT0kzQ

unfortunately the transcript is not yet on the website


Does anyone know how accessible lidar scans are for hobbyists? E.g. I would love to stick one to a drone to scan my backyard, even if it just shows me the rocks underneath


Buy the oldest second hand version of an iPhone that supports Lidar. Doesn't matter if the screen/battery is busted. Strap it to a drone and have your custom app broadcast that data somewhere while you're flying around.

How big is your backyard anyway? Maybe you could just walk around it with a selfie stick instead.

In which case, code the app first, buy iPhone, try it out the same weekend, return for free. :)
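A rough sketch of the receiving end, assuming the hypothetical phone app streams each point batch as newline-delimited JSON over UDP (a format I'm inventing here for illustration, not anything ARKit defines):

```python
import json
import socket


def parse_point_batch(payload: bytes) -> list[tuple[float, float, float]]:
    """Parse newline-delimited JSON points like {"x": ..., "y": ..., "z": ...}."""
    points = []
    for line in payload.decode("utf-8").splitlines():
        if not line.strip():
            continue
        p = json.loads(line)
        points.append((p["x"], p["y"], p["z"]))
    return points


def collect(host: str = "0.0.0.0", port: int = 9999, max_batches: int = 100):
    """Listen for UDP packets from the phone and accumulate one point cloud."""
    cloud = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((host, port))
        for _ in range(max_batches):
            payload, _addr = sock.recvfrom(65535)
            cloud.extend(parse_point_batch(payload))
    return cloud
```

From there you could dump the cloud to a .xyz/.ply file and open it in any point-cloud viewer.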


Using iPhone 12 Lidar to Map Two Unconnected Underground Spaces

https://www.youtube.com/watch?v=YOuYuL8OJDg


Could you find pipes, coax cables and other things in your yard or walls in your house with Lidar on the iPhone?


You should probably look into stud finders and metal detectors. I didn't even know this but apparently iPhones have a built in magnetometer so there are apps for that as well!


Talking to Humans is another great book in this genre. Short and succinct, with useful examples.


Cool thanks will definitely take a look +1


Best explanation I've heard was in Darknet Diaries about Zero Day Brokers, which was a fantastic listen! (https://open.spotify.com/episode/4vXyFtBk1IarDRAoXIWQFf?si=3...)

The short version is that if the bounties become too large, they'll lose internal talent who can just quit and do the same work outside the org. Another reason is that they can't offer competitive bounties for zero-days because they'd be competing with nation states, effectively a bottomless bank, so the price will always go up.

I don't know much about this topic, but surely there are some well structured bounty programs Apple could copy to find a happy middle ground to reward the white hats.


That explains the payout, but not the poor communication on the part of Apple.


this is the real reason. not anything internal/culture related

A good iOS 0-day is worth hundreds of millions of dollars in contracts with shady governments. Apple can't compete with that multiple times a year


This doesn't compute: is the claim Apple badly manages its bug-bounty because 0-days are too valuable? If that's the case, I'd expect the opposite effect: Apple would recognize how valuable the reports being sent to them by white-hats are, and would react with a sense of urgency and gratitude. As it is, Apple is behaving as if 0-days are worth very little, and not a big priority.


According to Zerodium, iOS exploits are cheaper than Android exploits because they are so plentiful in comparison.


Can you elaborate more on the "roles" of the "new stack"? To me dbt/dataform and airflow/dagster are quite similar, so why do you need one of each? Fivetran/Stitch/Singer are all new to me


I've used all of these so I might be able to offer some perspective here

In an ELT/ETL pipeline:

Airflow is similar to the "extract" portion of the pipeline and is great for scheduling tasks and provides the high-level view for understanding state changes and status of a given system. I'll typically use airflow to schedule a job that will get raw data from xyz source(s), do something else with it, then drop it into S3. This can then trigger other tasks/workflows/slack notifications as necessary.

You can think of dbt as the "transform" part. It really shines with how it enables data teams to write modular, testable, and version-controlled SQL - similar to how a more traditional developer writes code. For example, when modeling a schema in a data warehouse, all of the various source tables, transformation and aggregation logic, as well as materialization methods are able to live in their own files and be referenced elsewhere through templating. All of the table/view dependencies are handled under the hood by dbt. For my organization, it helped untangle the web of views building views building views and made it simpler to grok exactly what and where might be changing and how something may affect something else downstream. Airflow could do this too in theory, but given you write SQL to interface with dbt, it makes it far more accessible for a wider audience to contribute.
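To make the templating idea concrete, here's a toy sketch (plain Python, nothing to do with dbt's real internals) of how ref()-style placeholders let models reference each other while the tool works out the build order for you. The model names and SQL are made up:

```python
import re
from graphlib import TopologicalSorter

# Hypothetical models: each "file" is just SQL with {{ ref('name') }} placeholders.
MODELS = {
    "stg_orders": "select * from raw.orders",
    "stg_customers": "select * from raw.customers",
    "orders_enriched": (
        "select o.*, c.region "
        "from {{ ref('stg_orders') }} o "
        "join {{ ref('stg_customers') }} c on o.customer_id = c.id"
    ),
}

REF = re.compile(r"\{\{\s*ref\('(\w+)'\)\s*\}\}")


def deps(sql: str) -> set[str]:
    """Which other models does this SQL reference?"""
    return set(REF.findall(sql))


def build_order(models: dict[str, str]) -> list[str]:
    """Topologically sort models so dependencies build first."""
    ts = TopologicalSorter({name: deps(sql) for name, sql in models.items()})
    return list(ts.static_order())


def render(sql: str) -> str:
    """Replace each ref with the (already-built) table name."""
    return REF.sub(lambda m: m.group(1), sql)
```

The real tool does far more (materializations, tests, docs), but the dependency graph driven by ref() is the core trick.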

Fivetran/Stitch/Singer can serve as both the "extract" and "load" parts of the equation. Fivetran "does it for you" more or less with their range of connectors for various sources and destinations. Singer simply defines a spec for sources (taps) and destinations (targets) to be used as a standard when writing a pipeline. I think the way Singer drew a line in the sand and approached defining a way of doing things is pretty cool - however, active development on it really took a hit when the company was acquired. Stitch came up with the Singer spec, and their managed service runs and schedules the various taps and targets for you.


Airflow allows for more complex transformations of data that SQL may not be suited for. DBT is largely stuck with the SQL capabilities of the warehouse it sits on, so for instance, with Redshift you have a really bad time working with JSON-based data in DBT; Airflow can solve this problem. That's one example, but last I was working with it we found DBT was great for analytical-modeling-type transformations, but for getting whatever munged-up data into a usable format in the first place, Airflow was king.

We also trained our analysts to write the more analytical DBT transformations which was nice, shifted that work onto them.

Don't get me wrong though, you can get really far with just DBT + Fivetran; in fact, it removes like 80% of the really tedious but trivial ETL work. Airflow is just there for the last 20%.

(Plus you can then utilize airflow as a general job scheduler)
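As an example of that "last 20%": flattening nested JSON into warehouse-friendly columns before it ever reaches SQL is exactly the kind of pre-DBT munging an Airflow task might do. A generic stdlib sketch (not Airflow-specific code):

```python
def flatten(record: dict, parent: str = "", sep: str = "_") -> dict:
    """Flatten nested dicts into a single level of warehouse-friendly columns,
    e.g. {"user": {"id": 1}} -> {"user_id": 1}."""
    flat = {}
    for key, value in record.items():
        column = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            flat.update(flatten(value, column, sep))
        else:
            flat[column] = value
    return flat
```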


Great resource, thanks for sharing it! I will dig deeper into the resources linked here as there's a lot I have never seen before. The main topics are more or less exactly what I've found to be key in this space in the last 2 months trying to wrap my head around data engineering in my new job.

What I'm still trying to grasp is, first, how to assess the big data tools (Spark/Flink/Synapse/BigQuery et al.) for my use cases (mostly ETL). It just seems like Spark wins because it's the most used, but I have no idea how to differentiate these tools beyond the general streaming/batch/real-time taglines. Secondly, assessing the "pipeline orchestrator" for our use cases, where, like Spark, Airflow usually comes out on top because of usage. Would love to read more about this.

Currently I'm reading Designing Data-Intensive Applications by Kleppmann, which is great. I hope it will teach me the fundamentals of this space so it becomes easier to reason about different tools.


Only if it stays public. It was hugely important for me learning web dev, still is, and I would like it to continue to be for everyone entering the profession


This is amazing, I found a great domain for one of my projects when checking it out. Thank you for building and sharing it!


May I ask, why not just use the official provider? https://github.com/hasura/ra-data-hasura



thanks!

