Hacker News | mikaraento's comments

You do know that something similar is true for JPEG, right? :)

JPEG is a compression method. Files with JPEG-compressed data are most likely to be in either JFIF or EXIF container formats. Both will almost always use the .jpg/.jpeg file extension.
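
If you want to check what a given .jpg actually is, the container announces itself in the first application segments: JFIF files start with an APP0 segment carrying "JFIF", Exif files with an APP1 segment carrying "Exif" (some files have both). A rough sketch in Python, assuming a well-formed file; real parsers handle many more edge cases:

    # Peek at the leading JPEG markers to see whether the compressed data
    # sits in a JFIF or an Exif container (or both).
    import struct, sys

    def jpeg_container(path):
        containers = []
        with open(path, "rb") as f:
            if f.read(2) != b"\xff\xd8":              # SOI marker
                return ["not a JPEG"]
            while True:
                marker = f.read(2)
                if len(marker) < 2 or marker[0] != 0xFF:
                    break
                (length,) = struct.unpack(">H", f.read(2))
                payload = f.read(length - 2)
                if marker[1] == 0xE0 and payload.startswith(b"JFIF\x00"):
                    containers.append("JFIF")
                elif marker[1] == 0xE1 and payload.startswith(b"Exif\x00\x00"):
                    containers.append("Exif")
                elif marker[1] == 0xDA:               # start of scan: stop looking
                    break
        return containers or ["bare JPEG stream"]

    print(jpeg_container(sys.argv[1]))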


Nit: RDP’s roots are more in multi-user Windows (Citrix MetaFrame and the like) than in remote administration. I’ve found it to perform better than the alternatives (remote X11, VNC, Chrome Remote Desktop) for remote GUIs. NoMachine is the only alternative that came close to its performance.

(And before somebody jumps in to correct me - in ancient times X11 performed quite well over the network but modern Linux GUI apps are no longer designed to minimise X11 network traffic)


From my personal experience (by feel, not scientific), NVIDIA GameStream is way faster than RDP. I used it with Sunshine and Moonlight.


Things made for game streaming will be more responsive, at the cost of comparatively massive bandwidth usage. RDP can work reasonably well over slow connections.


Massive bandwidth usage and video compression artifacts. It's fine for games and media consumption, but may be problematic for office work.

Remote desktop protocols prefer lossless compression to achieve pixel-perfect rendering, at the expense of framerate/latency.

RDP is unique in that it's not just streaming, but integrates with Windows' GUI stack to actually offload compositing to the client. This however works less and less well with web and Electron apps which do not use native OS widgets.
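
A toy illustration of the lossless-but-cheap update model (nothing like RDP's actual wire format, which is far more involved): only tiles that changed since the last frame get sent, compressed losslessly, so a mostly static desktop costs almost nothing, whereas a video codec would keep re-encoding (and slightly degrading) the whole screen.

    # Frames are numpy arrays of shape (H, W, 3); zlib stands in for the
    # smarter lossless codecs and caching real protocols use.
    import zlib
    import numpy as np

    TILE = 64

    def changed_tiles(prev, curr):
        """Yield (x, y, losslessly compressed tile bytes) for tiles that differ."""
        h, w, _ = curr.shape
        for y in range(0, h, TILE):
            for x in range(0, w, TILE):
                a = prev[y:y+TILE, x:x+TILE]
                b = curr[y:y+TILE, x:x+TILE]
                if not np.array_equal(a, b):
                    yield x, y, zlib.compress(b.tobytes())

    # Typing in a terminal window dirties only a handful of tiles.
    prev = np.zeros((768, 1024, 3), dtype=np.uint8)
    curr = prev.copy()
    curr[100:110, 200:260] = 255   # a few characters' worth of pixels changed
    updates = list(changed_tiles(prev, curr))
    print(len(updates), "tiles,", sum(len(c) for _, _, c in updates), "bytes")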


Agreed that RDP is very well designed. And we don't have an equivalent in the Linux or Mac world; all competing protocols are a compromise. I am particularly impressed with the good multi-monitor support in RDP. Competitors have had more than a decade to get it right, but I am unaware of any that has.

If I connect remotely from a 2 monitor setup, disconnect and re-connect from my laptop with just a single display, it all magically works. Everything readjusts automatically. I don't know of any other remote desktop protocol/tool that does this so well.


I’m sorry you went through that


Relatively common in Finland to use young nettles like you’d use spinach in hot dishes (soup, blanched, pancakes).


> pancakes

Is this a frittata-style baked pancake? I've made rye pannukakku from the family cookbook here in the US Midwest but never seen any Finnish pancake with spinach or nettle.


Not quite. You take Finnish pancake batter (unleavened, a bit thicker than French crepes) and add blanched, finely chopped spinach or nettles.

https://scandicuisine.com/stinging-nettle-pancakes/ looks quite reasonable though most Finns would not use a blender for this.


I can be that someone this time. The "repair and protect" version has helped my low-level toothache.


Heads-up to people trying this: Gmail will often put forwarded email into spam. Be careful, especially in the beginning, to check your spam folder. They may also reject the mail outright as spam, especially if your volume is large.

IIUC it’s hard to make forwarding play nicely with DKIM and SPF. There’s some disagreement on how to handle it. (I’m being purposefully vague as I did interact with the folks handling this on the Google side and don’t want to cause them trouble for helping me out.)
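
For the SPF half of it, the mechanics are easy to see (DKIM is its own story): the receiver checks whether the IP that handed it the message is authorized by the envelope-from domain's SPF policy, and a forwarding host's IP normally isn't, unless the forwarder rewrites the envelope sender (SRS). A rough sketch with dnspython; the domain and forwarder IP below are placeholders, and real SPF evaluation has to chase include:/redirect= mechanisms and CIDR ranges rather than naively string-matching:

    import dns.resolver  # pip install dnspython

    def spf_record(domain):
        """Return the v=spf1 TXT record for a domain, if any."""
        for rr in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rr.strings).decode()
            if txt.startswith("v=spf1"):
                return txt
        return None

    sender_domain = "example.com"      # placeholder: domain in the envelope MAIL FROM
    forwarder_ip = "203.0.113.7"       # placeholder: IP of the forwarding host
    policy = spf_record(sender_domain)
    print("SPF policy for", sender_domain, "->", policy)
    # Naive check, just for illustration: the forwarder is usually not listed.
    print("Forwarder", forwarder_ip, "listed?", forwarder_ip in (policy or ""))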


There are also rate limits on the Google side for incoming mail. Just by forwarding four domains to my Gmail I used to hit them quite often. Then, 2 years ago, I stopped forwarding and switched to the now-discontinued Gmail option for fetching domain mail over POP...


Free email forwarding was one of the most useful features of Google Domains before the sell-off... since then, I've configured the domains to use Cloudflare, which offers the same feature relatively transparently. I haven't seen too many issues with non-spam going into my spam folder, even for relayed mail.

Aside: I do have a dedicated IP/VM for a Mailu setup, and will likely switch to using a vanity email as my catch-all instead of Gmail soon enough. It's kind of sad how generally bad email has become at this point. Will also likely start playing with a few different self-hosted webmail clients; I've considered and played with Nextcloud, just not sure how much I care for it.


The article points out that the human hand has over 10000 sensors with specific spatial layout and various specialised purposes (pressure / vibration / stretching / temperature) that require different mechanical connections between the sensor and the skin.


You don't need all those for most modern tasks though. Sure, if you wanna sew a coat or something like that, but most modern-day tasks require very little of that sort of skill.


Nature limited us to just 2 hands for all tasks and purposes. Humanoids have no such limitation.

>10000 sensors with specific spatial layout and various specialised purposes (pressure / vibration / stretching / temperature) that require different mechanical connections between the sensor and the skin.

Mechanical connections wouldn't be an issue if we lithographed the sensors right onto the "skin", similarly to chips.


Sorry, I meant to emphasize _different_ mechanical connections: a sensor that detects pressure has a different mechanical linkage than one detecting vibration. So you need multiple different manufacturing techniques to replicate that, at correspondingly higher cost.

The “more than 10000” also has a large impact on size (the sensors need to be very small) and cost (you are not paying for one sensor but for 10000).

Of course some applications can do with much less. IIUC the article is all about a _universal_ humanoid robot, able to do _all_ tasks.


Depends heavily on the use case. Indeed many tasks humans carry out are done without touch feedback - but many also require it.

An example of feed-forward manipulation is lifting a medium-sized object; the classic example is lifting a coffee cup. If you misjudge a full cup as empty you may spill the contents before your brain manages to replan the action based on sensory input. It takes around 300 ms for that feedback loop to happen. We do many things faster than that would allow.
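
A toy model of that loop, with made-up numbers: the lift force is planned feed-forward for an assumed cup mass, and sensed position only influences the command after a 0.3 s delay, so by the time feedback arrives the motion has already diverged from the plan.

    DT = 0.001                              # 1 kHz simulation step
    assumed_mass, true_mass = 0.35, 0.10    # planned for a full cup, got an empty one
    target_acc = 2.0                        # m/s^2 the plan asks for
    delay_steps = int(0.300 / DT)           # 300 ms sensory delay

    pos = vel = 0.0
    history = []
    for step in range(600):                        # simulate 0.6 s
        force = assumed_mass * target_acc          # feed-forward term
        if step >= delay_steps:                    # feedback arrives 300 ms late
            planned = 0.5 * target_acc * (step * DT) ** 2
            force += 2.0 * (planned - history[step - delay_steps])
        acc = force / true_mass                    # what the real (lighter) cup does
        vel += acc * DT
        pos += vel * DT
        history.append(pos)
        if step == delay_steps:
            planned = 0.5 * target_acc * (step * DT) ** 2
            print(f"at 300 ms: planned {planned:.2f} m, actual {pos:.2f} m")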

The linked article has a great example of a task where a human needs feedback control: picking up and lighting a match.

Sibling comments also make a good point that touch may well be necessary to learn the task in the first place. Babies do a lot of trial-and-error manipulation, and even adults do new tasks more slowly at first.


The industry's approach to "trial and error to learn the task" is to have warehouses of robots perform various tasks until they get good at them. I imagine that you'd rely on warehouses less once you have a real fleet of robots performing real tasks in real world environments (and, at first, failing in many dumb and amusing ways).

Robots can also react much faster than 300ms. Sure, that massive transformer you put in charge of high level planning and reasoning probably isn't going to run at 200 tokens a second. But a dozen smaller control-oriented networks that are directly in charge of executing the planned motions can clock at 200 Hz or more. They can adjust fast if motor controllers, which know the position and current draw of any given motor at any given time, report data that indicates the grip is slipping.
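
Rough sketch of that split (all values simulated, not any real robot's API): a slow planner updates the goal a couple of times per second while a 200 Hz inner loop watches motor current and tightens the grip the moment it sees a slip-like drop, long before the planner would notice.

    import random

    PLANNER_HZ, CONTROL_HZ = 2, 200
    SLIP_CURRENT_DROP = 0.2      # amps; a sudden drop is treated as "grip slipping"

    grip_force = 5.0             # newtons, current grip command
    goal = "hold object"
    last_current = 1.0

    for tick in range(CONTROL_HZ * 2):               # two seconds of 5 ms ticks
        if tick % (CONTROL_HZ // PLANNER_HZ) == 0:
            goal = "hold object"                     # slow replanning step (2 Hz)

        # Fast loop: fake a motor-current reading, occasionally with a dip.
        current = 1.0 + random.uniform(-0.05, 0.05)
        if random.random() < 0.01:
            current -= 0.5                           # simulated slip event
        if last_current - current > SLIP_CURRENT_DROP:
            grip_force *= 1.2                        # react within one 5 ms tick
            print(f"tick {tick}: slip detected, grip -> {grip_force:.1f} N")
        last_current = current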


Do you know of success stories here? Success of transferring models learned in physics simulation to the real world.

When we (ZenRobotics) tried this 15 years ago a big problem was the creation of sufficiently high-fidelity simulated worlds. Gathering statistics and modelling the geometry, brittleness, flexibility, surface texture, friction, variable density etc of a sufficiently large variety of objects was harder than gathering data from the real world.


We have massively better physics simulations today than 15 years ago, so the limitations you found back then don't apply today. It might still not be enough, but 15 years is a long time with Moore's law, and we already know the physics, so what was needed was mostly more computation.

Example of modern physics simulation: https://www.youtube.com/watch?v=7NF3CdXkm68


Google has done training in simulation: https://x.company/projects/everyday-robots/#:~:text=other%20...

I believe this is the most popular tool now: https://github.com/google-deepmind/mujoco
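
For what it's worth, a minimal MuJoCo loop looks like this (assuming the current `mujoco` Python bindings; sim-to-real work layers domain randomization and learned policies on top of loops like this):

    import mujoco   # pip install mujoco

    # A one-body scene in MJCF: a small box dropped onto a ground plane.
    MJCF = """
    <mujoco>
      <worldbody>
        <geom type="plane" size="1 1 0.1"/>
        <body pos="0 0 1">
          <joint type="free"/>
          <geom type="box" size="0.05 0.05 0.05" mass="0.1"/>
        </body>
      </worldbody>
    </mujoco>
    """

    model = mujoco.MjModel.from_xml_string(MJCF)
    data = mujoco.MjData(model)
    for _ in range(1000):                 # 1000 steps at the default 2 ms timestep
        mujoco.mj_step(model, data)
    print("box height after 2 s:", data.qpos[2])   # free joint: qpos[0:3] is xyz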


Thanks for the links.

AFAICT these have not resulted in any shipping products.


Around 2008 a core step in search was basically a grep over all documents. The grep was distributed over roughly 1000 machines so that the documents could be held in memory rather than on disk.

Inverted indices were not used as they worked poorly for “an ordered list of words” (as opposed to a bag of words).

And this doesn’t even start to address the ranking part.
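
The shape of that approach, as a sketch (processes stand in for the ~1000 machines; this is not anything resembling Google's actual code): shard the documents, have every worker scan its in-memory shard for the query as an ordered sequence of words, then gather the hits.

    from concurrent.futures import ProcessPoolExecutor

    def scan_shard(args):
        """Scan one in-memory shard for the phrase; return (shard_id, doc_id) hits."""
        shard_id, docs, phrase = args
        hits = []
        for doc_id, text in docs:
            # Naive substring stand-in for matching an ordered sequence of words
            # (as opposed to a bag of words).
            if phrase in text.lower():
                hits.append((shard_id, doc_id))
        return hits

    if __name__ == "__main__":
        shards = [
            (0, [(1, "the quick brown fox"), (2, "brown quick the fox")]),
            (1, [(3, "another quick brown fox sighting")]),
        ]
        phrase = "quick brown fox"
        with ProcessPoolExecutor() as pool:
            results = pool.map(scan_shard, [(sid, docs, phrase) for sid, docs in shards])
        print([hit for shard_hits in results for hit in shard_hits])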


It seems highly unlikely that they did not use indices. Scanning all documents would be prohibitively slow. I think it is more likely that the indices were really large, and it would take hundreds to thousands of machines to store the indices in RAM. Having a parallel scan through those indices seems likely.

Wikipedia [1] links to "Jeff Dean's keynote at WSDM 2009" [2] which suggests that indices were most certainly used.

Then again, I am no expert in this field, so if you could share more details, I'd love to hear more about it.

[1] https://en.wikipedia.org/wiki/Google_data_centers

[2] https://static.googleusercontent.com/media/research.google.c...


I worked on search at Google around that timeframe, and it definitely used an index. As far as I know, it has from the very beginning.

You can solve the ordered list of words problem in ways that are more efficient than grepping over the entire internet (e.g. bigrams, storing position information in the index).
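
A sketch of the positional-index idea (toy code, not how Google does it): each posting stores word positions, so a phrase query is an intersection of postings plus a position check, with no need to scan every document.

    from collections import defaultdict

    def build_index(docs):
        """term -> doc_id -> sorted list of word positions."""
        index = defaultdict(lambda: defaultdict(list))
        for doc_id, text in docs.items():
            for pos, term in enumerate(text.lower().split()):
                index[term][doc_id].append(pos)
        return index

    def phrase_query(index, phrase):
        terms = phrase.lower().split()
        postings = [index.get(t, {}) for t in terms]
        candidates = set(postings[0]).intersection(*postings[1:]) if postings else set()
        hits = []
        for doc_id in candidates:
            # The phrase matches if some occurrence of the first term is followed
            # by the remaining terms at consecutive positions.
            if any(all(p + i in postings[i][doc_id] for i in range(1, len(terms)))
                   for p in postings[0][doc_id]):
                hits.append(doc_id)
        return sorted(hits)

    docs = {1: "the quick brown fox", 2: "brown fox the quick", 3: "a quick brown fox ran"}
    index = build_index(docs)
    print(phrase_query(index, "quick brown fox"))   # -> [1, 3]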

