
Where do you like to keep up to date on these? Arxiv preprints, or some other place?


For Llama-based progress, Reddit's /r/LocalLlama has been my top source of info, although it's been getting a little noisier lately.

I also hang out on a few Discord servers:
- Nous Research
- TogetherAI / Fireworks / Openrouter
- LangChain
- TheBloke AI
- Mistral AI

These, along with a couple of newsletters, basically keep a pulse on things.


Lots of interesting information is fragmented across niche Discords. For instance, KoboldAI for merging and RP models in general, Llama-Index, and VoltaML and some others for SD optimization. I could go on and on, and I know only a tiny fraction of the useful AI Discords.

And yeah, /r/LocalLlama seems to be getting noisier.

TBH I just follow people and discuss stuff on Hugging Face directly now. It's not great, but at least it's not Discord.


Surprised someone doesn't just build an AI aggregator for this type of thing; seems like a really valuable product.


Those rooms move too fast, and often are segregated good/better/best (meaning the deeper you want to go on a topic, the "harder" it is, politically and labor-wise, to get invited to the server).


They have! And posted them on HN!

Some are pretty good! Check out this little curated nugget: https://llm-tracker.info/

I used to follow one with a UI that resembled HN itself, but now I can't find it in my bookmarks, lol.


Speaking of hard skills: how does one just hang out on a Discord server in any useful fashion? I lost the ability to deal with group chats when I started working full-time - there's no way I can focus on the job and keep track of conversations happening on some IRC or Discord. I wonder what the trick is to using those things as a source of information, other than "be a teenager, student, sysadmin, or otherwise someone with lots of spare time at work" - which is what I realized the communities I used to be part of consist of.


That's pretty much the trick.

Discord is so time-inefficient, it's almost hilarious. For every incredible conversation between experts you observe, you have to weed through 200 times as much filler.



Is that Mitsubishi? I would be curious to know how the anti-tamper relay setup was engineered. It would be great to have a hacker's guide on how to disable it!


At first look, it seems like having so much less mass of material involved gives you a good head start at competing on LCOE, even if the system needs the kite and tether replaced every 5-10 years.

I think the biggest roadblock would be making sure it can return to the perch with high reliability; otherwise you will have a lot of service calls for kites that landed on the ground.


Car keys are getting a lot more troublesome these days. I see an analogy... if you lose your security device you can pay $XX to get on a call and verify with an account security rep. Of course this can be deepfaked nowadays. Maybe you need to do password recovery in person with a notary!

Notary public... the new digital locksmith for password recovery.


This is an interesting thought. There are already some energy meters you can add to monitor all of the circuits in your house. It's a short step from there to have smart breakers or switches to help retrofit old-fashioned heaters, radiant systems, etc as thermal storage. Electric buffer tanks may become a cost effective installation when paired with an on-demand water heater and time-of-use electric pricing.


Mirror test of LLMs might be an interesting experiment to design!


What is an LLM? Is it the trained weights? The source code? The frontend? Where do you even hold the mirror?


Yeah, wouldn't the "test" essentially be letting it generate tokens forever, without user-written prompts?

Since an LLM has no sense of self or instances, what does it mean for it to talk to itself?

In a way, doesn't it already "talk to itself" when generating sentences, e.g., its output token gets added to the input tokens successively?
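That feedback loop is easy to sketch: each emitted token is appended to the context, and the model runs again on the longer sequence. A toy mock of the idea - `next_token` here is a made-up stand-in, not a real model:

```python
def next_token(context):
    # Stand-in for a real model: deterministically "predicts" the next
    # token from the context length. A real LLM would run a forward
    # pass over the whole context here.
    vocab = ["the", "cat", "sat", "on", "the", "mat", "<eos>"]
    return vocab[len(context) % len(vocab)]

def generate(prompt_tokens, max_new=5):
    context = list(prompt_tokens)
    for _ in range(max_new):
        tok = next_token(context)  # the model reads everything so far...
        context.append(tok)        # ...including its own earlier output
        if tok == "<eos>":
            break
    return context

print(generate(["hello"]))  # the prompt plus up to five self-fed tokens
```

In that narrow mechanical sense, every autoregressive generation is the model "listening" to its own prior output.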


> Since an LLM has no sense of self or instances

While I'd be surprised to learn they have anything a normal person would call a sense of self, it would only be mild surprise and even then mainly because it means we finally have a testable definition. (Amongst other things, I don't buy that the mirror test is a good test, but rather I think it's an OK first attempt at a test).

We're really bad at this.

> In a way, doesn't it already "talk to itself" when generating sentences, e.g., its output token gets added to the input tokens successively?

I'm not sure if that counts as talking to itself or not; I think that I tend to form complete ideas first and then turn them into words which I may edit afterwards, but is that editing process "talking to myself"?

And this might well be one kind of "sense of self". Possibly.


> In a way, doesn't it already "talk to itself" when generating sentences, e.g., its output token gets added to the input tokens successively?

If this is the basis of a mirror test, most AI recognition attempts have pretty high failure rates, so I'd say they currently fail. But if we presented a similar test to a human, "did you write this?" it seems to fall short of a mirror test because it can be falsified by an otherwise unintelligent algorithm which remembers its previous output.


You get it to talk with itself.


Wait, I think that might recursively turn into the singularity. So we can do it now, but around GPT-6.5 or LLaMa 5, unless this transformer-based explosion maxes out our silicon circuit tech by then, be careful.


I call dibs on this new concept of singularity-as-a-service.


Mild suggestion: experiment first. LLMs have been observed to emit nonsense such as getting stuck indefinitely emitting the same token, etc. Do you really want dibs on that?


Riiight. Then, I call dibs on the developer tooling for local singularity. Let others deal with the consequences. Should be safe enough?


We can have ChatGPT talk to itself by simply opening two chats and pasting back and forth. But the LLM can't win: if it notices then it will be called "wrong" because it is talking to another instance of itself. If it does not notice then it is "wrong" because it failed to notice.


With perfect duplication it's hard to tell; I imagine that if we had a magic/sci-fi duplication device that worked on people, and a setup that resolved the chirality problem, the subjects would have similar difficulties.


Indeed it would! Is anyone here going to try to do that?

As an observer is needed to assess the LLM, perhaps the easiest test is to copy-paste between two instances and then ask ChatGPT, or whichever LLM, "who were you talking to?".
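The copy-paste relay is mechanical enough to script. A hedged sketch, where the hypothetical `reply` function stands in for an actual chat-API call and the two lists are the two instances' separate conversation histories:

```python
def reply(history):
    # Hypothetical stand-in for one chat instance; a real version
    # would send `history` to an LLM API and return its answer.
    return f"msg {len(history)}"

def relay(opening, rounds=3):
    a, b = [opening], []           # two independent chat histories
    for _ in range(rounds):
        b.append(a[-1])            # paste A's last message into B's chat
        b.append(reply(b))         # B answers
        a.append(b[-1])            # paste B's answer back into A's chat
        a.append(reply(a))         # A answers
    return a, b
```

After the loop you could append the "who were you talking to?" question to either history and inspect the answer.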



You can't use two instances. They both would have individual selves.

I think an experiment would be to feed back whatever a LLM says to that same LLM, and see whether they’ll, at some time, say “why are you doing that to me?”


I tried with a few variations, GPT 3.5 and 4 seem to be pretty aligned in not expressing themselves when not asked a question. "Our conversation seems to be in a loop, if you have anything I can help you with ..." blah


The mirror test would be less interesting if we could program/teach animals to pass or fail it. So I wouldn’t be impressed if a LLM is able to pass these types of tests.


Probably wouldn't be very enlightening... there's no baseline sentience to base any claims of 'self-awareness' on.


I have seasonal allergies and I found an effective method for myself: when I realize the allergies are kicking in (usually after 6 hours of watery eyes and sneezing) I take a Claritin and a Zyrtec together, as well as spraying my nose with Flonase. Usually this knocks it out, and I will keep taking one of the once-a-day meds for a bit to prevent recurrence.

Zyrtec and Flonase together is probably the best normal combo and is generally accepted to be ok.

Disclaimers: I'm not a doctor. Combining a nose spray and a pill is generally accepted practice and has been studied in several peer-reviewed studies I've seen. Stacking Claritin and Zyrtec pills together is not generally accepted practice, so don't do it.


Yes, cetirizine and fluticasone are a good long-term treatment for allergies.

Direct decongestants like pseudoephedrine are of limited use because you quickly develop a tolerance and they become ineffective. With corticosteroid nasal sprays, they work best after consistent use over several days and keep working more or less forever.


Neither Claritin nor Zyrtec are options for me. They both make me very ill.


I did a bit of noodling on the Wikipedia page for VASS... Maybe I can make an example with software context... and this is probably wrong in some way, so take it with a grain of salt!

Take as an example a directed graph representing the state transition diagram of a state machine. The machine's internal state is some integers (call it memory) plus its current node in the graph. Each outbound transition from a node has an associated effect on the integers in memory (an addition or subtraction particular to that transition).

The VASS reachability problem: given a state (memory values plus location in the state graph), can you reach some other arbitrarily chosen state by navigating the transition graph, while never letting the integers in memory go negative? And what is the guaranteed worst-case time complexity of deciding whether an arbitrary final state is reachable?

VAS is the same problem but without the directed graph restricting which vectors are available. I think it can be intuited as a "vector walk" that must stay in the non-negative orthant: given a start point, an end point, and a list of available vectors you can add to move around the space, can you get from start to end?
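As a toy illustration of the VAS version: with an artificial bound on the coordinates, the walk can be brute-forced by breadth-first search. (The real problem has no such bound, which is exactly what makes its complexity so extreme - this is only a sketch of the setup, not the general algorithm.)

```python
from collections import deque

def vas_reachable(start, target, moves, bound=20):
    # BFS over configurations, staying in the non-negative orthant
    # and under an artificial per-coordinate bound so the search ends.
    seen = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if v == target:
            return True
        for m in moves:
            w = tuple(a + b for a, b in zip(v, m))
            if all(0 <= c <= bound for c in w) and w not in seen:
                seen.add(w)
                queue.append(w)
    return False

# (1,0) -> (0,2): trade one unit of the first counter for two of the second
print(vas_reachable((1, 0), (0, 2), [(-1, 2), (1, -2)]))  # -> True
```

The non-negativity check is what makes this different from plain linear algebra: a move is forbidden whenever it would drive any counter below zero.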

Edit: if someone more knowledgeable is reading, please let me know anything incorrect and I will delete or edit.


The diamond tennis bracelet or the terry sweatband tennis bracelet?


God forbid you should ever decide you want a sports car as a husband.


Happy wife, happy life. And honey do.


Happy spouse, happy house

