Hacker News | IamNotAtWork's comments

Therapy wasn't for me. I had a hard time opening up to complete strangers. Moreover, you need a lot of sessions to make a difference, which equates to a ton of money. So for years I had been taking meds, which general practitioners are happy to prescribe.

Recently I got completely off them. I know my body well enough to know that I can, so I did. I did it because I was unhappy with how many side effects they had while not really making me happier... just more level, I suppose. Even on meds, I still lacked the desire to do things I used to love.

I find that two things really shape how well I feel: 1) sleep, lots of it, and good quality too; 2) exercise. It helps to have exerted a lot of energy so that sleep comes easier. If I haven't exercised that day, sleep will be hard, which makes me more depressed and less effective at work the next day.

Like others have mentioned, you need to find a therapist that clicks with you.

Prioritize your health over your work. Work 8 hours and no more. If the project gets delayed, then so be it. Your PM will have to allocate more resources or fire you. Have enough money saved up so that even if you do get fired you aren't screwed. If your stress isn't caused by work, then it's something else, but since you're asking on HN I assume it is related to work, and the isolation that it brings.


Right, I think they said your content isn't monetized unless you have over 10k subscribers and X number of uploaded videos. This makes it hard to make money off of your cat, for example, as people click and share but don't subscribe. It's also harder to sustain a stream of these viral videos.

It's kind of sad that it went this way, but I think it can open opportunities for alternatives to YouTube.


Her site is 503 at the moment.


What was the root cause of the problem that motivated this visualization tool?


It's described in the blog post under "Origin", and the sample profile that we ship with FlameScope (examples/perf.stacks01) and which I used in the video is the first minute of the original problem.

It was intermittent latency that was hard to explain using existing tools.


Are these courses being taught by graduate students? The three main instructors seem to be students themselves, with an army of undergrad and grad TAs.

A pity that you pay so much money to attend Stanford only to be taught by your peers. Not knocking Stanford, as this is how it's done pretty much everywhere at the undergrad level now.


In the first few lectures of the course you get a pretty good history of deep learning, and you'll see it didn't really take off until around 2012. The reason it took off is mostly that people got better at the black magic of training a deep network.

So these grad students are exactly the people you want to learn from, because they have done the dirty work of fiddling with parameters and know which tricks work and which don't. It's probably preferable to a more theory-heavy course, because very few people (not even the more experienced professors) understand why those tricks work.

Note: I took an older version of the course which was started by Andrej Karpathy who was a grad student at the time but is now the Director of Artificial Intelligence at Tesla.


Can you elaborate on what in CV became obsolete overnight? I took a survey course in CV but haven't kept up. Do you still do facial detection, object recognition, camera calibration, and image stitching the same way as in 2012? Or has it changed because processing has gotten faster and the results are near real-time?


We used to make feature detectors manually.

E.g. https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature...

These were at the root of many detectors. They still are for some applications, but for most of them, a few CNN layers manage to learn far better, and very counter-intuitive, detectors.

Facial detection/recognition was based on features. This is not my specialty, so I don't know if DL got better there too, as their features were pretty advanced, but if DL methods are not there yet I am sure it is just a matter of time.

I can see image stitching benefiting from a deep-learned filter pass too.

Camera calibration is pretty much a solved problem by now, I don't think DL adds a lot to it.

Like I said, not everything became obsolete, but around 50% of the field was taken over by DL algorithms where, before that, hand-crafted algorithms usually had vastly superior performance.


Just to confirm, for facial recognition/detection: modern DNN algorithms do outperform the 'classic' methods that took decades of continuous improvement.


Hashbase seems to function as one of the nodes in the DAT network, so even if none of the peers are online at least Hashbase is still there. That's what I gathered from a quick scan.

The DAT network seems interesting and is new to me, but like you said, needing a central node to cache the data seems to break the p2p story.


Hashbase is a convenience feature, bridging the gap, so to speak, between p2p and centralised systems. It's not required at all for dat:// to work.

The main benefit is that users of http browsers can visit your dat:// site.


I wonder if, as part of the agreement with Intel, AMD agreed not to bring out a Ryzen 7 APU with better graphics, to keep that segment competition-free. Currently there are no plans for anything higher than Ryzen 5 in the APU space.

Would that be anti-competitive behavior or just part of business?


IANAL but that sounds extremely illegal.


Would it be illegal for AMD to agree not to enter a competing part in that segment though? It's not as though they're colluding on price or squashing other competitors.


Apple probably needs a datacenter of its own anyway. Just think about all of the data they use internally for projects, and other sensitive stuff they would never allow to leave the network. So building a state of the art DC with low PUE makes sense.

However, farming out some of the costs to other cloud providers seems like a good strategy to eliminate a single point of failure, or to avoid losing everything if one provider somehow loses data. And maybe then they can focus on adding compute units rather than storage, and backup for the storage units.

In short, despite Apple's user base, I still don't think it is on the scale of AWS or Google.


I really don't like state machines because if you need to add a new event, each of the states needs to be updated to handle that event. If you add a new state, you have to figure out how to handle each of the transitions from other states. So as your states grow, the maintenance on the developer's side grows faster than linearly.

Additionally, the first time I come across one, I find that I cannot understand how the program works without actually drawing out the state machine diagram, so there is a bit of a learning curve. It's also a nightmare to test because of all of the states that need to be tested.

So in summary, not a fan. Like recursion: if it feels natural, then use it, but I don't go and try to turn stuff that isn't a SM into one, or turn something that can be done with loops into a recursive function for fun.


>I really don't like state machines because if you need to add a new event, each of the states needs to be updated to handle that event. If you add a new state, you have to figure out how to handle each of the transitions from other states. So as your states grow, the maintenance on the developer's side grows faster than linearly.

State machines have been designed to neatly capture and componentize exactly that.

Anything else is less explicit, and even more error prone.

That new event and those new transitions you've mentioned? You still need to handle them anyway -- only you do it in an informal manner, without a SM.


For the life of me, I can't read your complaints as actual complaints, but rather as endorsements of the idea.

I haven't done it yet, but I am tempted to make everyone I work with draft out the state machine of everything we are working with in our systems. I'm half convinced that our worst bugs happen when folks didn't realize the change they were making required modifications elsewhere, because of how far-reaching it was in the state of the system.

I take a simple vending machine as a good thought exercise. If you are just changing the system that recognizes coins, you can easily localize your changes. If you are changing the system that accepts coins...
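A minimal sketch of that vending machine as an explicit transition table (the state and event names here are purely illustrative, not from any real system):

```python
# Vending machine as an explicit transition table: every legal
# (state, event) pair appears exactly once, so adding an event forces
# you to decide, state by state, whether it is allowed.
TRANSITIONS = {
    ("idle", "insert_coin"): "has_credit",
    ("has_credit", "insert_coin"): "has_credit",
    ("has_credit", "select_item"): "dispensing",
    ("dispensing", "item_taken"): "idle",
}

def step(state, event):
    """Advance the machine; an unmapped event is a hard error."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise RuntimeError(f"illegal event {event!r} in state {state!r}")

state = "idle"
for event in ("insert_coin", "select_item", "item_taken"):
    state = step(state, event)
print(state)  # prints "idle"
```

Drafting this table is essentially the exercise suggested above: it makes visible exactly which parts of the system a change reaches.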


This.

How much of the anti-formalism attitude is about real difficulties of the method, and how much is about "I don't know this method, so it must be bad"?

Code, regardless of what it does, is a state machine. That does not change whether you do it intentionally or not. And if you don't use state machine design tools to design them... then your state machine becomes a huge mess, with transitions going from everywhere to everywhere and very surprising connections in many places (and most/all of those will be bugs -- bugs that no unit test is ever going to find).

Which seems to be acceptable for a lot of people.


Both of your concerns seem to be nicely addressed by statecharts (nesting to constrain relations + visualization): https://medium.freecodecamp.org/how-to-model-the-behavior-of...

(haven't used this yet)


> if you need to add a new event, each of the states needs to be updated to handle that event

Depends on your formalism. I never use state machines of that form for exactly the reason you say. Rather, each state defines the conditions which cause a transition from it. Receiving an event in a state in which it is not expected (say, an I/O completion in a state which should not have outstanding I/O) is a straight-up hard error.
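A hedged sketch of this formalism, where each state declares only the events it expects (the class and event names are hypothetical, chosen to echo the I/O example):

```python
# Each state owns its outgoing transitions; any other event arriving
# is a straight-up hard error rather than something silently ignored.
class State:
    transitions = {}  # event name -> name of the next state

    def on_event(self, event):
        if event not in self.transitions:
            # e.g. an I/O completion in a state with no outstanding I/O
            raise RuntimeError(f"{type(self).__name__}: unexpected event {event!r}")
        return self.transitions[event]

class Idle(State):
    transitions = {"start_io": "WaitingForIO"}

class WaitingForIO(State):
    transitions = {"io_done": "Idle", "io_error": "Idle"}
```

Adding a new event then only touches the states that can legitimately see it; everywhere else it fails loudly.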


This idea of defining a hard error for a transition that makes no sense is a nice way to deal with the OP's problem. You must still explicitly handle the situation, but it minimizes the "unwind/undo" code.


I don't understand the issue very well... If you have a new event, you either have to handle it in the state machine or some other way. Any place where the event doesn't apply should cause an appropriate failure. Same with new states - you have to write those transitions in some way. You can use macros/abstractions/whatever for simplifying many cases. But none of that code really disappears when you don't use SM.


I think the concern is that the update is dispersed across the code base. But this only applies to events that are legal in all states; otherwise the default behavior should be invoked. If a new event that is legal in many states is created, then you have to handle it everywhere anyway, and in my experience if/then structures fill with bugs fast in this case.


That faster-than-linear maintenance growth is exactly why you need SMs.

You don't need to test every scenario - only the critical positive/negative ones.

If you abstract out all the generic SM logic, you can have a neat source file per SM that contains only the allowed transition mappings, or do something like https://github.com/pluginaweek/state_machine where the transitions are contained behind methods.
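In Python rather than that library's Ruby, abstracting the generic logic out might look like this (a sketch; the `Machine` class and the door example are mine, not from the linked project):

```python
# Generic driver: all the SM mechanics live here, once...
class Machine:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"no transition for {event!r} from {self.state!r}")
        self.state = self.transitions[key]
        return self.state

# ...so each concrete machine reduces to one declarative mapping --
# the "neat source file per SM" containing only the allowed transitions.
door = Machine("closed", {
    ("closed", "open"): "open",
    ("open", "close"): "closed",
})
```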

My experience with recursion has been the reverse: the more recursion in use, the less predictably the code behaves (at least initially). But even just a little bit of SM usage can increase stability from the start, and it also forces you to think about all the states required.


What's wrong with drawing the state machine diagram? And generally, what's wrong with drawing your code structure?

You may be able to keep it all in your head when you write it the first time... try again when you're doing maintenance after 6 months of not touching it though.


> I really don't like state machines because if you need to add a new event, each of the states needs to be updated to handle that event. If you add a new state, you have to figure out how to handle each of the transitions from other states

You still have to do all this without state machines, but likely in a less organized and harder to maintain way.


>I really don't like state machines because if you need to add a new event, each of the states needs to be updated to handle that event

Depends, but usually no. An event arriving in a state that has no transition for it would be an error. The state becomes undefined and you produce a crash.

>If you add a new state, you have to figure out how to handle each of the transitions from other states.

Yes, but only for the states that transition to this new state, which is easily formalized.

>Additionally, I find that I cannot understand how the program works without actually drawing out the state machine diagram if I come across it the first time, so there is a bit of a learning curve

SM diagrams are fairly easy, they were mandatory course material in my second semester at university.

>It's also a nightmare to test because of all of the states that need to be tested.

In fact, the opposite is usually true. You can mathematically verify that your state machine will always behave exactly as expected or crash. The coffee machine won't dispense coffee and return the cash put in; the state machine in it does not allow it. Even better, such a series of events becomes utterly impossible. Once the coffee has been dispensed, the machine has only one way forward: the initial state.
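Because the state space is finite, that claim can be checked by brute force. A sketch with made-up states and events: enumerate every event sequence up to a bound and confirm that no legal run both brews coffee and refunds the cash within one transaction.

```python
from itertools import product

# Toy coffee-machine SM (states/events are illustrative, not a real machine).
T = {
    ("idle", "insert_cash"): "paid",
    ("paid", "refund"): "idle",         # cash returned, no coffee
    ("paid", "brew"): "dispensed",      # coffee made, cash kept
    ("dispensed", "take_cup"): "idle",  # only way forward: the initial state
}
EVENTS = ("insert_cash", "refund", "brew", "take_cup")

def run(events):
    """Replay events; return 'BAD' if one transaction both brews and refunds."""
    state, refunded, brewed = "idle", False, False
    for e in events:
        nxt = T.get((state, e))
        if nxt is None:
            return None  # illegal sequence: the machine would hard-error here
        refunded |= e == "refund"
        brewed |= e == "brew"
        if refunded and brewed:
            return "BAD"
        state = nxt
        if state == "idle":
            refunded = brewed = False  # transaction complete, reset
    return state

# Exhaustively check all event sequences up to length 5: 'BAD' is unreachable.
assert all(run(seq) != "BAD" for n in range(6) for seq in product(EVENTS, repeat=n))
```

This is the crude version of the verification; model checkers do the same reachability argument without enumerating sequences.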

Additionally, SMs allow you to verify that your specific implementation is optimal. And if it isn't, you can easily derive one that is. And you can test whether two independent SM implementations are equivalent to each other with 100% certainty.

Sadly, since their state is very limited, they are usually not very useful once you want to do something that can't be expressed as a finite state machine (basically anything with threads, ever, to start with).

