It’s probably most important to take a long-term view of asset allocation, stick to the plan, and be tax efficient. Once you have that covered, tactical positions can maybe help a touch.
Diversification is the bedrock of portfolio management: owning a range of assets that collectively perform acceptably across a range of scenarios. But it’s generally not sexy - avoiding permanent loss of capital is not something you catch individuals bragging about.
Think about what risk you’re willing to take, in the context of your job/career prospects, current investments etc.
These are things for you to decide. Wouldn’t trust anyone who says buy x without knowing your individual circumstances.
What I can say is there are consistent patterns for many successful investors, and the media will tend to focus on the outliers / lottery winners, which by definition are difficult to emulate / replicate. Be wary of survivorship bias and the narrative fallacy.
In a productive way, this view also shifts the focus to improving the system (visibility etc.) and empowering the team, rather than focusing on the code that broke - which probably strikes fear into the individuals involved and discourages them from doing anything at all!
If we think about applicability to AI (as the footnote suggests was on the author's mind)… I found myself thinking about the motivations and incentives that existed at the time, to understand why.
Ships - my read is that major naval powers essentially reduced the downside to owners of ships (making them responsible) while giving the owners salvage rights (“curing” the problem of a wreck that may be an impediment to passage). That seems to make sense if you put your “I’m a shipowner” hat on. Balanced by the governments also saying that if your ship sank and someone else recovers it, they get some cut, because hey, you didn’t recover it.
And the others seem largely about modern governments appeasing religious and indigenous groups. And it’s interesting that this acknowledgement seems to be part of a broader solution (Trust, ongoing governance etc)
The first seems more financially motivated (capping downside and clearing shipwrecks). The latter seems more about protecting a natural resource/asset.
So then you think about who is leading AI and what their incentives may be… Conveniently, we already have modern companies that limit liability. Do they just use that structure (as they are), or do they seek to go further and say this Agent or LLM is its own thing, and as a company I’m not responsible for it?
Maybe that is convenient for the companies, and for the governments in the countries leading the charge…? Looks more like ships than rivers..?
I think AI personhood will come about via another path: the same one as animal rights.
We humans, in general, suffer when we perceive animals suffering. It's an entirely emotional response. Humans are developing emotional attachments to LLMs. It follows, to an extent, that people will try to shore up the rights of LLMs simply to assuage their own emotions. It doesn't actually matter whether or not an LLM can feel pain, only whether it can express pain in a way that triggers a sympathetic emotion in a person.
After a ~20 year break from first person shooters I’ve recently played Call of Duty Multiplayer and what struck me was how many superficial skins or various rewards were visible to others - it seems to steer the player to accumulate these things (through play or $), to show others in the game.
And the odd pumpkin heads (literally players with pumpkins as heads) running around coinciding with Halloween.
Very different from Counter-Strike circa 2005.
Roughly the same mechanics but much more commercialised, playing to the psychological weaknesses of players.
Yup. The central argument seems to include an assumption that LLMs will be the same tomorrow as today.
I'd note that people learn and accumulate knowledge as new languages and frameworks develop, despite there being established practices. There is a momentum for sure, but it doesn't preclude development of new things.
Not quite. The central argument is that LLMs tomorrow will be based on what LLMs output today. If more and more people are vibe-coding their websites, and vibe-coding predominantly yields React apps, then the training data will have an ever larger share of React in it, thus making tomorrow's LLMs even more likely to produce React apps.
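The compounding claimed here can be sketched as a toy simulation. All the names, numbers, and the amplification rule below are my own assumptions for illustration, not anything stated in the thread: each generation, a model over-produces the majority framework relative to its training share, and that output is folded back into the next training corpus.

```python
def amplify(share: float, bias: float = 1.2) -> float:
    """Assumed model behavior: the probability of emitting React
    slightly amplifies React's share of the training data.
    bias > 1 means the majority framework is over-produced."""
    return min(share * bias, 1.0)

def simulate(initial_share: float = 0.4, generations: int = 10,
             mix: float = 0.5) -> list[float]:
    """Track React's share of the training corpus over generations.
    mix = fraction of each new corpus that is model-generated."""
    share = initial_share
    history = [share]
    for _ in range(generations):
        output_share = amplify(share)          # what the model emits
        share = (1 - mix) * share + mix * output_share  # fold back in
        history.append(share)
    return history

print([round(s, 3) for s in simulate()])
```

Under these (made-up) parameters the share rises monotonically toward 100%: even a mild per-generation bias compounds once model output becomes training input, which is the feedback loop the comment describes.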