

Location: Europe

Remote: Remote only

Willing to relocate: No

Technologies: Python, TypeScript, Go, Node.js, Express, LLMs, React, Next.js, Firebase, Docker, GraphQL

Résumé/CV: lokmanefe.com

Email: hello@lokmanefe.com

GitHub: https://github.com/lokicik

LinkedIn: https://www.linkedin.com/in/lokmanefe/

Early-career fullstack engineer shipping end-to-end features across frontend, backend, and AI systems. Comfortable owning production work remotely.


I built a small utility library for generating placeholder text from custom corpora. It has no dependencies, works in both Node and the browser, and supports deterministic output through seeding, which is useful for tests and reproducible fixtures.

GitHub: https://github.com/lokicik/placetext

npm: https://www.npmjs.com/package/placetext

Main features:

• zero dependencies

• corpus based text generation (multiple built in corpora)

• deterministic mode for consistent test output

• TypeScript implementation

• ESM and CommonJS builds

• small footprint
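A quick usage sketch (illustrative only; the function and option names here are a simplified approximation, so check the README for the real API):

    // Hypothetical usage: names and options below are illustrative,
    // not necessarily the published API.
    import { generate } from "placetext";

    // Non-deterministic: a few sentences from a built-in corpus
    const filler = generate({ sentences: 3, corpus: "classic" });

    // Deterministic: same seed, same output, handy for snapshot tests and fixtures
    const fixture = generate({ sentences: 3, corpus: "classic", seed: 42 });
    console.log(fixture);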

Happy to answer questions or hear suggestions.


How will the LLM-driven systems coordinate with RimWorld’s existing AI and job scheduler without causing performance issues? Won’t the AI system you’re planning to implement end up being too slow?


In my own system in Unity, I used LLMs merely as an orchestrator and kept Behaviour Trees (https://arxiv.org/abs/1709.00084) as the core of NPC behaviour. I assume this question is directed at the NPC engine part of the mod, "FelPawns", as of right now.

Since my goal is to manipulate NPC behaviour in-game directly, and to use the LLM for more than just roleplay-chatbot purposes, performance is indeed an issue. I think the hardest part is bridging the LLM and RimWorld's internal scheduler seamlessly. I am loosely following some articles, like Generative Agents (https://arxiv.org/abs/2304.03442), for inspiration (and to innovate/iterate on them). Nevertheless, my goal is not to make a better version of RimWorld pawns, just a more immersive version, and to try some methods I've wanted to try along the way. Taking decisions away from the player and delegating them to a machine that is not good at long-term planning will never be a viable choice in RimWorld, in terms of meta-gaming. However, if we focus on the social aspects of the pawns and leave the performance-sensitive parts to the game itself, I think we can hit a pretty nice middle ground.
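To make that split concrete, here is a rough sketch of the pattern in simplified TypeScript (the real mod is C# inside RimWorld, and all the names below are made up): the behaviour tree keeps running every tick on its own, and the LLM is only consulted occasionally, off the hot path, to pick or re-prioritize high-level goals.

    // Sketch: LLM as an occasional orchestrator, behaviour tree as the executor.
    // Illustrative names only; nothing here is the actual mod code.

    type Goal = "Socialize" | "Work" | "Rest";

    interface BehaviourTree {
      setGoal(goal: Goal): void;
      tick(): void; // cheap, runs every game tick
    }

    class PawnBrain {
      private pending: Promise<Goal> | null = null;

      constructor(
        private tree: BehaviourTree,
        private askLlm: (context: string) => Promise<Goal>,
      ) {}

      // Called rarely (say, once per in-game hour); never blocks the tick.
      replan(context: string): void {
        if (this.pending) return; // don't stack requests
        this.pending = this.askLlm(context).then((goal) => {
          this.tree.setGoal(goal); // hand the decision back to the tree
          this.pending = null;
          return goal;
        });
      }

      // Called every tick by the game; the LLM is never on this path.
      tick(): void {
        this.tree.tick();
      }
    }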


Assuming the LLM calls up specific events, does it differ much from using a random number to decide which event to call?


If we are talking about the Quest Generation part (which is still pretty early in development), it's not just calling pre-existing RimWorld map events (like Raids, Animals Join, Ambrosia Sprout, etc.).

Instead, I am trying to create a sort of lego-bricks approach for the LLM. Basically, I provide it with a list (sometimes a decently huge list) of building blocks categorized as "Action", "Condition", and "Reward" nodes, each interface also inheriting from the "Node" interface. The LLM composes these nodes, which are parsed and created by a factory class at runtime, leading to a tree-like structure that fires a quest and tracks it at runtime if possible; if that's not possible, the LLM will interpret whether the quest is complete (sadly, I will have to add a "Scan if complete" button somewhere to save on performance).
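As a simplified sketch of the shape I have in mind (the mod itself is C# and these names will almost certainly change):

    // "Lego bricks" quest format, illustrative only.
    interface QuestNode { kind: "action" | "condition" | "reward"; }

    interface ActionNode extends QuestNode { kind: "action"; action: string; }      // e.g. "SpawnTrader"
    interface ConditionNode extends QuestNode { kind: "condition"; check: string; } // e.g. "ColonistCount >= 5"
    interface RewardNode extends QuestNode { kind: "reward"; reward: string; }      // e.g. "Silver:500"

    interface QuestTree {
      title: string;
      conditions: ConditionNode[]; // must hold for the quest to fire
      actions: ActionNode[];       // what actually happens on the map
      rewards: RewardNode[];       // granted when the quest completes
    }

    // Factory: validate the LLM's JSON against the known building blocks,
    // dropping anything it hallucinated that we don't support.
    function buildQuest(llmJson: string, allowedActions: Set<string>): QuestTree | null {
      try {
        const raw = JSON.parse(llmJson) as QuestTree;
        raw.actions = raw.actions.filter((a) => allowedActions.has(a.action));
        return raw.actions.length > 0 ? raw : null;
      } catch {
        return null; // malformed response: retry or fall back to a vanilla event
      }
    }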

Of course this approach is prone to lots of refactoring and design changes, since I am learning by doing. Just yesterday I was reading about level generation using LLMs (https://arxiv.org/abs/2407.09013), which goes into a more detailed discussion in chapter 5, especially 5.2.

The LLM will also have context about your colony, your recent quests and recent events, the map situation, etc., so it will certainly be more immersive than just randomizing the events. After all, even the default storytellers (even Randy) play by some rules and are not totally random.

I am also trying to keep the core architecture pretty abstract, so I (and maybe other developers) can just patch it and easily implement their own "Nodes" (maybe I should start calling them leaves...), and this sorta adds some development overhead too. If I can find some time, I will share some examples of the generated quests (both the raw response from the LLM and the processed result) in the future.


I love the idea; it's one of the narrow areas where I wish the tech was being used more instead of less. But thinking about it: if I ask LLMs to list 10 good cities to visit, I'll almost always get Paris, New York, and Tokyo in the list.

So your item/trait generation may end up with some similar but non-identical items/traits when presented with similar scenarios.


That's definitely true. LLMs, especially small ones, tend to give not-so-random responses. This can be mitigated a little bit by context (fortunately, RimWorld already has some lore-building built in), but it will still never be completely unique. Without trying I can't say much, to be honest, but it's promising nevertheless.
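For what it's worth, the mitigation I have in mind is mostly just grounding the prompt in colony-specific state and recent history, roughly like this (an illustrative sketch, not the actual prompt):

    // Illustrative: feed colony state into the prompt so similar scenarios
    // still diverge, instead of always getting the "Paris / New York / Tokyo" answer.
    function buildStorytellerPrompt(colony: { lore: string; recentEvents: string[] }): string {
      return [
        "You are the quest storyteller for this colony.",
        `Colony lore: ${colony.lore}`,
        `Recent events: ${colony.recentEvents.join("; ")}`,
        "Propose one new quest that does not repeat any of the recent events above.",
      ].join("\n");
    }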


Good luck either way :)


Location: Europe

Remote: Yes, remote only

Willing to relocate: No

Technologies: Python, TypeScript, Go, Node.js, Express, LLMs, React, Next.js, Firebase, Docker, GraphQL

Résumé/CV: lokmanefe.com

Email: hello@lokmanefe.com

GitHub: https://github.com/lokicik

LinkedIn: https://www.linkedin.com/in/lokmanefe/

---

I love it when people enjoy the things I build.


I’ve been looking for a job in tech for a while. the process feels broken.

- 50 interviews, 1000+ applications → 4 responses → 0 offers (just pure ghosting, even for unpaid work).

- multiple companies asking for “10+ years” in tools that are 5 years old.

- one role asking for frontend, backend, fullstack, and devops: basically 4 engineers in a trench coat.

it feels like the system optimizes for keywords, not skills. sometimes i wonder if job posts exist just to harvest free take-home projects or bug bounties. (yes, that happened.)

curious: how are others navigating this market? does ghosting ever end, or is this just the new normal?


> the process feels broken.

Yes and no; what do I mean by that? Yes, it looks like the system is fundamentally broken, and at the same time, no, it's not, because it has always been this way, so this is the expected output.

The problem right now is that history repeats itself: the current market symptoms are (more or less) the same as those back in 2000, when the dot-com bubble was about to burst!

They were asking for unrealistic stuff back then too, and you could tell from the amount of money poured into the tech industry that something was completely off, which eventually confirmed everyone's suspicion.

I have been looking for a job myself for a while now, and let me tell you, it's complete, mad chaos out there, especially with the AI frenzy everyone and their mother is talking about.

Hold on tight, don't stop, and keep looking for anything until the storm calms down.

Good luck!


Location: Europe

Remote: Yes, remote only

Willing to relocate: No

Technologies: Python, TypeScript, Go, Node.js, Express, LLMs, React, Next.js, Firebase, Docker, GraphQL

Résumé/CV: lokmanefe.com

Email: hello@lokmanefe.com

GitHub: https://github.com/lokicik

LinkedIn: https://www.linkedin.com/in/lokmanefe/

---

I love it when people enjoy the things I build. The other day I made a realtime Kahoot-style game; 25 of us played together and it was so much fun. I dream of creating moments like that for millions, and I'm especially excited about system design: making things scale and stay reliable at that level. I can quickly learn and adapt to any technology if it helps me build better experiences.


This guy really turned his note-taking app project into a new take on context engineering.


Does this mean that the next GPT model could be online, like Gemini?

