Hacker News | ratedgene's comments

can you point me to some materials that are easy to digest?


Easy to digest, to me, is a matter of process plus order. Just because you can boof wine/AI and feel a greater high than if you just took a sip doesn't mean you should. I'd start by setting the table and throwing out your last meal: start over in the 60s and work forward.

Stafford Beer on cybernetics (also worth mentioning: Norbert Wiener): https://www.youtube.com/watch?v=JJ6orMfmorg

Lots of other people start w/ other things but i'm a mgmt minded person so a social engineering + psychology + anthropology oriented lens has always been my anchor.

My first real intro to math where everything clicked was with primitive graph theory, as it were 2000+ years ago. From there, algebra, geometry, trig, calc, etc. started clicking.


love Norbert Wiener, fantastic. the great cyberneticians of their times were such far-future-forward visionaries it's a bit astounding.


A. Demski's illustrated, more blog-style (rather than papers) works online.


What I mean is that we want something like set theory merged with weak computability theory (e.g. Kolmogorov complexity), but where "set" means environment, "embedded agent" means an element inside the set, and then everything else gets built on top of that theory. This may be an infinitely-large-feeling work because there are many rules & interactions & games, but you are essentially answering what it means to be a rational embedded agent that's part of an environment that probably doesn't have the exact same wants, in the abstract & formal sense of the inquiry.

Once you start examining scenarios where there are multiple clones of you that you perhaps cannot tell apart, your memory has been limited, you are facing larger amounts of pain & pleasure than you can handle without going insane, everyone wants the same thing but you also want something different, or someone gets mind-reading powers that might be partial and only work when they reveal their thoughts... then you are analytically working towards the end goal of building a larger catalogue of "everything" & making a theory & terminology out of that.

I would dedicate 2-20 years to the formal work, but there's no funding I know of that I could get, and besides, I couldn't promise to yield results, since foundational work yields no results until it does, if it ever does.

---

A related note: Kolmogorov complexity & finitism go along beautifully, but the bit-lifting needed to calculate how much memory a computational process takes is beyond my capacity. Nevertheless, you can define a maximum number ever allowed to exist in a system as a Church numeral, and then perhaps take the minimum number of logical operations required to get there as the maximum amount of memory your operations can consume, closing the system, since you lose memory in, say, subtraction to a, b, the process's initial size (hello, Kolmogorov) & the memory the process consumes at most (in any procedure, the smallest procedure being the number itself).
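To make the Church-numeral idea a bit more concrete, here is a toy Python sketch. Everything in it is illustrative; the step counter is only a crude stand-in for the "logical operations" cost being described, not a real Kolmogorov measure:

```python
# Toy Church numerals, with a global step counter as a crude proxy for
# the "minimum logical operations" cost mentioned above.

def church(n):
    """Build the Church numeral for n: a function that applies f n times."""
    def numeral(f):
        def apply_n(x):
            for _ in range(n):
                x = f(x)
            return x
        return apply_n
    return numeral

def to_int(numeral):
    """Decode a Church numeral by counting applications of successor."""
    return numeral(lambda k: k + 1)(0)

steps = 0
def counted_succ(k):
    """Successor that also tallies how many operations were spent."""
    global steps
    steps += 1
    return k + 1

three = church(3)
assert to_int(three) == 3

# "Cost" of materializing the numeral 3: exactly three successor steps.
church(3)(counted_succ)(0)
assert steps == 3
```

Under this (very loose) reading, the number itself is its own smallest procedure, and the operation count bounds the memory the process is allowed.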


TikTok is rife with Russian sock-puppet accounts run by trolls that try to persuade people. Look at any of the political posts and then check out the accounts posting pro-red and anti-blue sentiments. There is a clear pattern to their accounts.


We have to be careful, it cuts the other way too.

Part of the propaganda is also showing extreme left content AND extreme right. The goal is not pushing left or right, the goal is divisiveness.


We have to be careful that once authoritarian rule is established globally we won't really be able to get out of it. Right now, using social media to make that happen seems to be extremely effective and because democracies believe in the idea of "free speech at all costs" we really have nothing to stop this.

The answer was probably better education, which needed to start 20 years ago to counter this, but we didn't do it, and now we really have no answer but to elect every puppet they push.


That's true to an extent, but the trend is definitely leaning towards one side more than the other. I'm pretty sure that's just because of where the traction leads the algos to push the bots. Whether one side is more susceptible than the other is a question for a different thread, but I don't know how to attempt to learn the answer without fanning the flames of either side.


that’s my instagram


Hey, I wonder if we can use LLMs to learn learning patterns. I guess the bottleneck would be the curse of dimensionality when it comes to real-world problems, but I think maybe (correct me if I'm wrong) geographic/domain-specific attention networks could be used.

Maybe it's like:

1. Intention, context

2. Attention scanning for components

3. Attention network discovery

4. Rescan for missing components

5. If no relevant context exists or found

6. Learned parameters are initially greedy

7. Storage of parameters gets reduced over time by other contributors
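A toy sketch of that numbered loop; every function name and the "components" representation here are made up purely to make the control flow concrete:

```python
# Hypothetical sketch of the numbered steps above; nothing here is a real
# library. "Components" are just shared words, "networks" adjacent pairs.

def scan_for_components(intention, context):
    # Step 2: pretend components are the words intention and context share.
    return sorted(set(intention.split()) & set(context.split()))

def discover_network(components):
    # Step 3: a trivial "attention network": edges between adjacent components.
    return list(zip(components, components[1:]))

def find_missing(network, context):
    # Step 4: context words not yet covered by any edge.
    covered = {w for edge in network for w in edge}
    return [w for w in context.split() if w not in covered]

def learn(intention, context, max_rescans=3):
    components = scan_for_components(intention, context)   # steps 1-2
    network = discover_network(components)                 # step 3
    for _ in range(max_rescans):                           # step 4
        missing = find_missing(network, context)
        if not missing:
            break
        components = sorted(set(components) | set(missing))
        network = discover_network(components)
    if not network:                                        # step 5
        return {"params": "greedy-init"}                   # step 6
    return {"params": network}                             # steps 6-7

print(learn("route to work", "find a route to work today"))
```

Step 7 (other contributors compressing stored parameters over time) is the part this sketch can't show; it would need the "central authorities" mentioned below.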

I guess this relies on there being the tough parts: induction, deduction, abductive reasoning.

Can we fake reasoning to test hypotheses that alter the weights of whatever model we use for reasoning?


Maybe I'm just complicating unsupervised reinforcement learning, and adding central authorities for domain specific models.


Love the word "Autopoietic", nobody really knows about it and any text that uses it for sure will capture my interest.

I first thought of this in the context of self-assembling autonomous agents in 2016. Good times, dreaming about a future where AI permeates every facet of our lives.


Then you might really like the book I first learned the word from, "Intelligence Emerging"[1] by Keith L. Downing. It's a very dense book about self-organizing processes, and emergence in general, and it drives me a bit crazy because it's one of my favorite books, but I've never heard anyone mention it.

[1] https://mitpress.mit.edu/9780262536844/intelligence-emerging...


Awesome rec. I'll check it out :)


Having AI permeate _every_ facet of life seems horrifying to me.


May I ask why? I'm sincerely asking, no intention of flaming or trolling.

I think we're at a point where "online" stuff already permeates every facet of our lives. And many of these systems already employ "AI" (for very limited definitions of AI) at every step. You search based on embedding and language models. You get ads based on graph theory. You see friends' posts based on this. You get approved for a loan based on "AI", hell, even some legal cases are handled by extremely badly implemented "AI" systems, and so on and so on.
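The "search based on embeddings" point can be made concrete with a toy cosine-similarity lookup; the vectors below are fabricated stand-ins, not from any real model:

```python
import math

# Toy embedding search: fabricated 3-d "embeddings" stand in for what a
# real language model would produce; the ranking idea is the same.
docs = {
    "how to bake bread":   [0.9, 0.1, 0.0],
    "sourdough starter":   [0.8, 0.2, 0.1],
    "tax filing deadline": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

query = [0.85, 0.15, 0.05]  # pretend this embeds "bread recipe"
ranked = sorted(docs, key=lambda d: cosine(docs[d], query), reverse=True)
print(ranked)  # the bread documents rank above the tax one
```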

I feel we're slowly approaching a phase where we could get that "sci-fi" like "personal assistant" that maybe can have access to all of our data, and can "act" in our best interests. Maybe when our data is considered, the "AI" assistant could have a say. Maybe it gets to decide when and how to share stuff. Maybe it gets access to the underlying algorithms and decides when and where to "agree" or "accept" our data being used for the average / median interest.

It seems plenty of systems already use that data in day-to-day life. I'm looking forward to having systems where the good parts can continue while the concerning parts (control over data, control over algorithms) are somehow limited. It's probably too much for a human, but I can see how we could all have "agents" that follow some of our interests and have a say in the process. It certainly seems closer than "sci-fi", closer than two decades ago.


I was talking at length today with a teacher who works with me about the impact AI/LLM models are having now on students' attitudes towards learning.

When I was young, I refused to learn geography because we had map applications. I could just look it up. I did the same for anything I could, offload the cognitive overhead to something better -- I think this is something we all do consciously or not.

That attitude seems to be the case for students now, "Why do I need to do this when an LLM can just do it better?"

This led us to the conclusion:

1. How do you construct challenges that AI can't solve?

2. What skills will humans need next?

We talked about "critical thinking", "creative problem solving", and "comprehension of complex systems" as the next step, but even when discussing this, how long will it be until more models or workflows catch up?

I think this should lead to a fundamental shift in how we work WITH AI in every facet of education. How can a human be a facilitator and shepherd of the workflows in such a way that can complement the model and grow the human?

I also think there should be more education around basic models and how they work as an introductory course to students of all ages, specifically around the trustworthiness of output from these models.

We'll need to rethink education and what we really desire from humans to figure out how this makes sense in the face of traditional rituals of education.


> When I was young, I refused to learn geography because we had map applications. I could just look it up. I did the same for anything I could, offload the cognitive overhead to something better -- I think this is something we all do consciously or not.

This is certainly useful to a point, and I don't recommend memorizing a lot of trivia, but it's easy to go too far with it. Having a basic mental model about many aspects of the world is extremely important to thinking deeply about complex topics. Many subjects worth thinking about involve interactions between multiple domains, and being able to quickly work through various ideas in your head without having to stop umpteen times can make a world of difference.

To stick with the maps example, if you're reading an article about conflict in the Middle East it's helpful to know off the top of your head whether or not Iran borders Canada. There are plenty of jobs in software or finance that don't require one to be good at mental math, but you're going to run into trouble if you don't at least grok the concept of exponential growth or have a sense for orders of magnitude.


Helpful in terms of what? Understanding some forced meme? "Force this meme so you can understand this other forced meme" is not education, it's indoctrination. And even if you wanted to, for some unknown reason, understand the article, you can look at a (changing and disputed) map, as the parent said.

This is the opposite of deep knowledge, this is API knowledge at best.


Are you referring to:

> if you're reading an article about conflict in the Middle East it's helpful to know off the top of your head whether or not Iran borders Canada

Perhaps, but in the case that you are, I think it's a stretch to say that the only utility of this is 'indoctrination' or 'understanding this other forced meme'. The point is that lookups (even to an AI) cost time, and if you have to do one for every other line in a document, you will either end up spending a ton of time reading, or (more likely) do an insufficient number of lookups and come away with a distorted view of the situation. This 'baseline' level of knowledge IMO is a reasonable thing to expect for any field, not 'indoctrination' in anything other than the most diluted sense of the term.


I think at a certain point, you either value having your own skills and knowledge, or you don't. You may as well ask why anyone bothers learning to throw a baseball when they could just offload to a pitching machine.

And I get it. Pitchers who go pro get paid a lot and aren't allowed to use machines, so that's a hell of an incentive, but the vast majority of kids who ever pick up a baseball are never going to go pro, are never even going to try to go pro, and just enjoy playing the game.

It's fair to say many, if not most, students don't enjoy writing the way kids enjoy playing games, but at the same time, the point was mostly never mastering the five paragraph thesis format anyway. The point was learning to learn, about arbitrary topics, well enough to the point that you could write a reasonably well-argued paper about it. Even if a machine can do the writing for you, it can't do the learning for you. There's either value in having knowledge in your own brain or there isn't. If there isn't, then there never was, and AI didn't change that. You always could have paid or bullied the smarter kids into doing the work for you.


> they could just offload to a pitching machine

Sure, but watch out for the game with a pitching machine, a hitting machine, and a running machine.

I do think there is a good analogy here - if you're making an app for an idea that you find important, all of the LLM help makes sense. You're trying to do a creative thing and you need help in certain parts.

> You always could have paid or bullied the smarter kids into doing the work for you.

Don't overlook the ease of access as being a major contributor. Paying $20/month to have all of your work done is still going to deter some students from using it. Paying $200/month would for sure bring the number of student users near zero. When it's free you'll see more people using it. Just like anything else.

Totally agree with your main points.


The five paragraph thesis format isn't about learning to learn, it's about learning how to format ideas.

Just learning a thing doesn't mean you can communicate it


So maybe, if there isn't a perceived value in the way we learn, then how learning is taught should change to keep itself relevant; it's not about what we learn, but how we learn to learn.


> I refused to learn geography because we had map applications

Which is ironic, because geography isn’t about memorizing maps


Funny you should say that. In Sweden, to get good grades in English you have to learn lots of facts about the UK, like population, names of kings and so on. What does that have to do with English? It's spoken in many other countries too. And those facts change; the answers weren't even up to date now...


That’s odd. Outside of reading comprehension assignments I never had any fact memorization as part of any language course.

Perhaps they changed the curriculum since the 90s


Yes, I was very confused when my daughter came home with some bad scores on a test and I couldn't understand what she meant. I had to call the teacher to get an explanation that it wasn't a history lesson, it was an English lesson... Really weird, to just not cover it.

Swedish schools get a makeover every time we change government. It's one of those things they just have to "fix" when they get to power.


Some parts of it were though.


Almost all parts require it, but none are about it. That's how background knowledge works. If you can't get over the drudgery of learning scales and chords, you'll never learn music. The fact that many learners never understand this end goal is sad but doesn't invalidate the methodology needed to achieve the progression.


It would be interesting to test adults with the same tests that students were given, plus some more esoteric knowledge. What they learned at school could then be compared to see new information that they learned after school, as well as information and skills that they didn't use after school. It may help focus learning on useful skills and knowledge that people have actually used, as well as information they didn't learn in school that would be useful to them!


> That's how background knowledge works. If you can't get over the drudgery of learning scales and chords, you'll never learn music.

Tell that to drummers


As a drummer, you need to learn your scales and chords. It still matters, and the way you interact with the music should be consistent with how the chords change, and where the melody is within the scale.

Your drumming will be "melodic" if you do so


Not to mention tuned drums


> We talked about "critical thinking", "creative problem solving", and "comprehension of complex systems" as the next step, but even when discussing this, how long will it be until more models or workflows catch up?

Either these things are important to learn for their own sake or they aren’t. If the former, then nothing about these objectives needs changing, and if the latter then education itself will be a waste of time.


There's so much dystopian science fiction about people being completely helpless because only machines know how to do everything. Then the machines break down.


The Machine Stops by E. M. Forster is another very good one:

https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/...

And re-skimming it just now I noticed the following eerie line:

> There was the button that produced literature.

Wild that this was written in 1909.


It's such an amazing short story. Every time I read it I'm blown away by how much it still seems perfectly applicable.


“The Feeling of Power” is excellent and should be mandatory reading in English classes from here on out.


Pump Six (by Paolo Bacigalupi) comes into my mind.


I think that the classic of the genre is "The feeling of power" (https://en.wikipedia.org/wiki/The_Feeling_of_Power).


> more education around basic models and how they work

Yes, I think this is critical. There's a Slate Star Codex article, "Janus Simulators", that explains this very well, which I rewrote to make more accessible to people like my mom. It's not hard to explain this to people; you just need to let them interact with a base model and explore its quirks. It's a game, and people are good at learning systems they can get immediate feedback from.


> How can a human be a facilitator and shepherd of the workflows in such a way that can complement the model and grow the human?

Humans must use what the AI doesn't have: physicality. We have hands and feet, we can do things in the world. AI just responds to our prompts from the cloud. So the human will have to test ideas in reality, to validate, to do experiments. AI can ideate; we need to use our superior access and life-long context to help keep it on the right track.

We also have another unique quality: we can be punished, we are accountable. AI cannot be meaningfully punished for wrongdoing; what can you do to an algorithm? But a human can assume responsibility for an AI in critical scenarios. When there is a lot of value at stake, we need someone who can be accountable for the outcome.


Actually, it shows the real problem with education... and what education is for!

Education is not a way to memorize a lot of knowledge, but a way to train your brain to recognize patterns and to learn. Obviously you need some knowledge too, but you generally don't need to be an expert, only to have "basic" knowledge.

Studying different domains allows you to learn different knowledge, but also to learn new ways of thinking.

For example: geography allows you to understand geopolitics, and often sociology and history. And urban city design. And war strategy. And architecture...

So, when students are using LLMs (and it's worse for children), they're missing out on training their brain (yes... they get dumber) and on learning basic human knowledge (so they're more prone to any fake news, even the most obvious)


I think there is a bit of a 3rd category as well:

1. What can tools do better now that no human could hope to compete with?

2. Which other tasks are likely to remain human-led in the near term?

3. For the areas where tools excel, what is the optimum amount of background understanding to have?

E.g. you mention memorizing maps. Memorizing all of the countries and their main cities is probably not very optimal for 99.999%+ of people vs referencing a map app. At the same time needing to pull up a map for any mention of a location outside of "home" is not necessarily optimal just because the map will have it. And of course the other things about maps in general (types, features, limitations, ways to use them, ways they change) outside of a particular app implementation that would go along with general geography.


I'm not sure I understand the geography point - maps and indexes have been around for hundreds of years - what did the app add to make it not worthwhile learning geography?


geolocation, search, path/route finding.

I don't really care to memorize (which was most of the coursework) things which I can just easily look up. Maybe geography in the south was different than how it was taught elsewhere though.


The correct answer, and you'd see it if folks paid attention to the constant linkedin "AI researcher/ML Engineer job postings are up 10% week over week" banners, is to aggressively reorient education in society to education about how to use AI systems.

This rustles a TON of feathers to even broach as a topic, but it's the only correct one. The AI engineer will eat everything, including your educational system, in 5-10 years. You can either swim against the current and be eaten by the sharks, or swim with it and survive longer. I'll make sure my kids are learning about AI-related concepts from the very beginning.

This was also the correct way to handle it circa the calculator era. We should have made most people get very good at using calculators, and doing "computational math" since that's the vast majority of real world math that most people have to do. Imagine a world where Statistics was primarily taught with Excel/R instead of with paper. It'd be better, I promise you!

But instead, we have to live in a world of luddites and authoritarians, who invent wonderful miracle tools and then tell you not to use them because you must struggle. The tyrant in their mind must be inflicted upon those under them!

It is far better to spend one class period, teaching the rote long multiplication technique, and then focus on word problems and applications of using it (via calculator), than to literally steal the time of children and make them hate math by forcing them to do times tables, again and again. Luddites are time thieves.


> The correct answer, and you'd see it if folks paid attention to the constant linkedin "AI researcher/ML Engineer job postings are up 10% week over week" banners

This does not really lend great credence to the rest of your argument. Yes, Linkedin is hyping the latest job trend. But study after study shows that the bulk of engineers are not doing ML/AI work, even after a year of Linkedin putting up those banners -- and if there were even 2 ML/AI jobs at the start of such a period, then 10% week-over-week growth would imply that the entire population of the earth was in the field.

Clearly that is not the case. So either those banners are total lies, or your interpretation of exponential growth (if something grows exponentially for a bit, it must keep growing exponentially forever) is practically disjointed from reality. And at that point, it's worth asking: what other assumptions about exponential growth might be wrong in this world-view?
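The arithmetic is easy to check; 10% week-over-week compounds to absurd numbers almost immediately:

```python
import math

weekly_growth = 1.10              # "up 10% week over week"
print(weekly_growth ** 52)        # one year of compounding: ~142x

# Weeks until even 2 postings would exceed the world population (~8 billion):
weeks = math.log(8e9 / 2) / math.log(weekly_growth)
print(weeks, weeks / 52)          # ~232 weeks: under five years
```

So sustained 10% weekly growth from any nonzero base outruns the planet in a handful of years, which is exactly why the banner can't mean what it seems to say.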

Perhaps by "AI engineer" you (like many publications nowadays) just mean to indicate "someone who works with computers"? In that case I could understand your point.


> We should have made most people get very good at using calculators, and doing "computational math" since that's the vast majority of real world math that most people have to do.

I strongly disagree. I've seen the impact on students who used calculators to the point they limited their ability to do math. When presented with math in other fields, ones where there isn't a simple equation to plug into a calculator, they fail to process the math because they don't have the number sense. Things like looking over a few experiments in chemistry and looking for patterns become a struggle. Noticing that 2 L of hydrogen and 1 L of oxygen create 2 L of water vapor is the same as saying 2 parts hydrogen plus 1 part oxygen create 2 parts water, which means 2 molecules of hydrogen plus 1 molecule of oxygen create 2 molecules of water. That in turn implies 1 molecule of oxygen has to be made of some even number of oxygen atoms, so it can be split in half between the 2 water molecules, which must have the same number of oxygen atoms in each. (This is part of a larger series of problems relating to how chemists worked out empirical formulas in the past, eventually leading to the molecular formula, and then to molecular weight and a whole host of other properties we now know about atoms.)

Without these skills, they are unable to build the techniques needed to solve newer, harder problems, much less do independent work in the related fields after college.
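For what it's worth, the gas-volume argument above is pure ratio arithmetic; here is a toy check of the reasoning with exact fractions, assuming one oxygen atom per water molecule as the minimal case:

```python
from fractions import Fraction

# Observed volume ratio: 2 L hydrogen + 1 L oxygen -> 2 L water vapor.
# By Avogadro's hypothesis, equal volumes hold equal molecule counts:
h2, o2, h2o = 2, 1, 2   # relative numbers of molecules

# One oxygen molecule must supply oxygen to 2 water molecules, so
# (assuming one oxygen atom per water molecule, the minimal case)
# it has to contain 2 atoms, i.e. oxygen is diatomic:
oxygen_atoms_per_molecule = Fraction(h2o, o2)
assert oxygen_atoms_per_molecule == 2

# And each water molecule gets exactly one hydrogen molecule's worth:
hydrogen_molecules_per_water = Fraction(h2, h2o)
assert hydrogen_molecules_per_water == 1
```

The point of the parent comment is precisely that a student should be able to run this chain in their head; the code only spells out the steps.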

>Imagine a world where Statistics was primarily taught with Excel/R instead of with paper. It'd be better, I promise you!

I had to take two very different stats classes back in college. One was the raw math, the other was how to plug things into a tool and get an answer. The one involving the tool was far less useful. People learned how to use the tool for simple test cases, but there was no foundation for the larger problems or critiquing certain statistical methodologies. Things like the underlying assumptions of the model weren't touched, meaning students would have had a much harder time when dealing with a population who greatly differed from the assumption.

Rote repetition may not be the most efficient way to learn something, but that doesn't mean avoiding learning it and letting a machine do it for you is better.


I remember seeing a paper (https://pmc.ncbi.nlm.nih.gov/articles/PMC4274624/) that talked about how physical writing helps kids learn to read later. Typing on a keyboard did not have the same effect.

I expect the same will happen with math and numbers. To be fair, you said "primarily", so you did not imply doing away with paper completely. I am not certain, though, that we can do away completely with at least some pain. All the skills I acquired usually came with both frustration and joy.

I am all for trying new methods to see if we can do something better. I have no proof either way though that going 90% excel would help more people learn math. People will run both experiments and we will see how it turns out in 20 years.


Times tables aren’t the problem. Memorizing is actually fun and empowering if done right.

But I agree that the "learning is pain" framing is just not my experience.


> geography

In Germany the subject is called "Erdkunde", which would translate literally to "earth studies". And this term is, I assume, more appropriate, as it isn't just about what is where, but also about geological history and science and how volcanoes work and how to read maps and such.


How did the people who wrote the LLM and associated software do it when they had no such thing to "just look it up"?


Stackoverflow/stack exchange was a proto-LLM. Basically the same thing but 1-2 day latency for replies.

In 20 years we'll be able to tell this in a stereotypically old geezer way: "You kids have it easy, back in my day we had to wait for an actual human to reply to our daft questions.. and sometimes nobody would bother at all!"


yeah search in general, bulletin boards, shared knowledge bases, etc.


To be honest, AI is very much still in its discovery/explore phase.


Is that also the "F*** Around" phase?

IMO, the "Find Out" is at least 1-2 years away.


You can pretty much bypass this with residential proxies.


I've done the same for use in foreign currency exchanges. The adventure of reverse engineering protocols and finding security checks, etc. was more fun than the actual accomplishment, lol.


I upvoted you for the sentiment alone.


Makes sense since it's a sentiment based conversation.


I love the Gaussian splatting that's going on. I also love the people pushing Gaussian splatting and generative AI. I really feel there is something there, but I'm not quite sure what yet. It's cool seeing this unfold, but I'm also worried it can turn into something like Photosynth, where it was a cool exercise but not much came out of it. I would love input from someone involved in this tech who could blue-sky where it could be applied in interesting ways.


> like Photosynth, where it was a cool exercise but not much came out of it

It's hard to know what Photosynth could have become. It was shut down by Microsoft for unknown reasons. If it was open source it might have evolved and might still be around.

Gaussian Splatting has multiple implementations on both the training/generation side and the rendering side.


100%. I really think Gaussian Splatting as a method can offer a lot, but it's also not powerful enough (yet) to be useful. Specifically editing and relighting GS models is rather clunky.

I think SplatGallery is an exploration of how people use the technology, what it can be useful for today, and what the next step is.


Gaussian splatting has been my obsession lately =)

Hard to believe, but the main technique is just over a year old (built on the shoulders of giants). This is the seminal paper for it [1], and here's a three-hour video so you really understand it [2] (also a great how-to-really-read-a-paper-if-you-are-serious video). So what's great is a bunch of labs saw that and started building on it, so in the last three months there have been so many great improvements. And so much work done openly!

This has a great video roll [3] of some recent work, including use in construction and forestry.

If you have an Apple Vision, try MetalSplatter [4] and you will get an idea of how OMFG this stuff is.

We are in such a rich time of new compression knowledge and reality into weird representations!

I've been trying to evangelize it, but it takes so much foundation to understand how interesting it is. Many people (even software engineers and computer scientists) don't understand traditional 3D rendering pipelines and meshes/triangles and lighting. So you have to explain that, then concepts of spherical harmonics, gaussians with affine transforms, and the miracle that happens when you sample millions of them using raycasting at 800 fps. The neural network approaches go at 2fps.

In ideating and communicating the possibilities, I try to focus on the workflows...

Capture: we can sample lighting and depth of scenes with our phones or fancy cameras or drones; we can also use photogrammetry (algos/math to create depth fields from photos). This isn't specific to 3DGS, but 3DGS empowers it to be useful. So we have tech where we can more easily capture objects and environments, edit them, and play them back. 3D capture has been around a long time (e.g. I worked on the Immersion Microscribe in the mid-90s [5], and this is what machine vision used to be about), but we didn't have techniques to infer structure from the point clouds.

Processing: magic math turns this into a bunch of Gaussians with a transformation matrix (bell curves of different shapes floating around) which represent the structure. Literally, the components are spherical harmonics (color), density (alpha), variance, translation, and rotation. Scenes will have many hundreds of thousands to tens of millions of them.
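Those components can be written down directly. Here is a minimal, illustrative 2D version (a toy, not any real renderer's data layout; real splats store spherical-harmonic coefficients and full 3D covariances) that builds a covariance from scale and rotation and evaluates the splat's density at a point:

```python
import math
from dataclasses import dataclass

@dataclass
class Splat2D:
    """Toy 2-D splat; color+alpha stand in for the spherical-harmonic terms."""
    color: tuple   # (r, g, b); real splats store SH coefficients
    alpha: float   # density / opacity
    mean: tuple    # translation (x, y)
    scale: tuple   # per-axis standard deviations (sx, sy)
    theta: float   # rotation in radians

    def covariance(self):
        # Sigma = R S S^T R^T, with S = diag(scale) and R a 2-D rotation.
        c, s = math.cos(self.theta), math.sin(self.theta)
        sx2, sy2 = self.scale[0] ** 2, self.scale[1] ** 2
        return [[c*c*sx2 + s*s*sy2, c*s*(sx2 - sy2)],
                [c*s*(sx2 - sy2),   s*s*sx2 + c*c*sy2]]

    def density(self, x, y):
        """Unnormalized Gaussian falloff at (x, y), scaled by alpha."""
        a, b = x - self.mean[0], y - self.mean[1]
        (m00, m01), (_, m11) = self.covariance()
        det = m00 * m11 - m01 * m01
        # quadratic form with the 2x2 inverse covariance
        q = (m11*a*a - 2*m01*a*b + m00*b*b) / det
        return self.alpha * math.exp(-0.5 * q)

g = Splat2D(color=(1, 0, 0), alpha=0.8, mean=(0, 0), scale=(2, 1), theta=0.0)
assert abs(g.density(0, 0) - 0.8) < 1e-9   # peak at the mean
assert g.density(4, 0) > g.density(0, 4)   # elongated along x
```

A renderer sorts millions of these by depth and alpha-blends them per pixel; the data model itself is this small, which is why you can build an intuition for it.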

I tend to mention how LLMs capture aspects of knowledge into a big sea of weights that get computation applied to them, and that this is very abstractly similar; and researchers have worked with Neural Radiance Fields. But what's great about Gaussian+Transform is that you can actually get an intuition of what's going on -- and the editors let you edit the Gaussians and filter/prune them. You can't do that with an NN (no intuition, no direct manipulation).

Rendering: those objects are sampled and drawn at interactive rates. You can represent scenes or objects. You can commingle them with traditional 3D assets. Works on recent phones very well. So this tech is broadly available now for playback.

The thing about them is that they lack structure. It can be very ghostly and ethereal. So I think in the near term this tech will be fantastic for customization/personal object capture/integration with scenes... not for simulation, but for human communication. As noted, the forestry and construction videos hint at this. Also product displays on websites -- here's a Shopify plugin [6].

I have so much to say about it, but will stop here =)

[1] https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

[2] https://www.youtube.com/watch?v=xgwvU7S0K-k&t=1s

[3] https://www.youtube.com/playlist?list=PLrhy9mGYkm0aZnjL-4OpO...

[4] https://apps.apple.com/us/app/metalsplatter/id6476895334

[5] https://revware.net/microscribe-portable-cmm/microscribe-i-p...

[6] https://bitbybit.dev

