I’ve been using something similar called Twinny.
It's a VS Code extension that connects to a locally hosted LLM of your choice through Ollama and works like Copilot.
It's an extra step to install Ollama, so it's not as plug-and-play as TFA, but the MIT license makes it worthwhile for me.
https://github.com/twinnydotdev/twinny
If you started that chart in 2020 it would be pretty eerie. If I worked at Motley Fool I'd regurgitate this as a blog post, a Reddit DD thread, and so much more.
But really, if I knew finance, I'd be interested in using month-over-month post volume to inform trades, since Who's Hiring is posted on the first day of the month.
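As a rough sketch of that idea (my own illustration, not a trading strategy), something like the following could pull per-thread comment counts from the public HN Algolia search API; the assumption that the threads come from the whoishiring account and the use of nbHits as a comment count are mine:

    # Sketch: comment volume of recent "Ask HN: Who is hiring?" threads
    # via the public HN Algolia API (assumed endpoints/fields noted above).
    import requests

    API = "https://hn.algolia.com/api/v1"

    def hiring_threads(pages=2):
        """Yield (title, objectID) for recent 'Who is hiring?' stories."""
        for page in range(pages):
            r = requests.get(f"{API}/search_by_date",
                             params={"tags": "story,author_whoishiring",
                                     "query": "Who is hiring", "page": page})
            r.raise_for_status()
            for hit in r.json()["hits"]:
                if "who is hiring" in hit["title"].lower():
                    yield hit["title"], hit["objectID"]

    def comment_count(story_id):
        """Count comments attached to one thread."""
        r = requests.get(f"{API}/search",
                         params={"tags": f"comment,story_{story_id}",
                                 "hitsPerPage": 0})
        r.raise_for_status()
        return r.json()["nbHits"]

    for title, sid in hiring_threads():
        print(f"{title}: {comment_count(sid)} comments")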
> Several high profile companies have already announced layoffs in recent weeks
Which companies are these exactly?
What do long-tenured developers see in this latest news cycle? Is Bloomberg just stirring up drama, or is there a strong possibility of a genuine, long-lasting drought in developer salaries?
Seems I missed the "Great Resignation". I've stayed at the same company for the last few years, making a mediocre salary because they let me take a "senior" role with fewer YOE, and I'm building skills faster than if I'd started over at another company.
On the other hand, I know people who've moved for a significant raise to companies that are more sensitive to their stock performance, with less job security.
Seems pretty simple. When we put upper and lower bounds on some score, people with lower scores have more room to overestimate and those with higher scores have more room to underestimate, causing the perceived score to trend towards the mean.
I think there's both a numerical and a psychological component here. If the spread in perceived scores caused by inaccuracy is wide enough to touch the bounds, it will force a trend towards the mean. This effect is possibly exacerbated by a tendency of perception to stray from the extremes: subjects with a score near the edges will trend towards the mean more strongly because they are unlikely to rate themselves the very best or the very worst.
This seems pretty simple to correct for, so I'm skeptical that nobody has done so yet in these experiments. If true, it's an oversight as interesting as the Monty Hall problem. The basic premise is that the structure of an experiment will naturally nudge randomness in a particular direction, and we need to adjust for that in the analysis. Everyone who does this type of work should know this.
In a simplified experiment where we give people a 3-question quiz, those who got 2 questions right have one overestimation option, 3, and two underestimation options, 0 and 1. So it's easy to adjust for the autocorrelation by checking whether a large group of 2-scorers underestimate more than twice as often as they overestimate. Then we see how their tendencies compare against 1-scorers, and how those deviate from the natural baseline of overestimating twice as often as they underestimate.
I haven't reviewed these types of papers, but if nobody made even that basic adjustment in their analysis, how many others have been missed in experiments like this?
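A minimal simulation of that baseline (my own sketch, not from any of the papers) makes the adjustment concrete: if perceived scores were pure uniform guesses, 2-scorers would underestimate about twice as often as they overestimate, and a real effect would have to show up as a deviation from that ratio.

    # Null model for the 3-question quiz: perceived score is a uniform
    # random guess over 0..3, i.e. no genuine self-knowledge at all.
    import random

    def over_under(actual, trials=100_000):
        over = under = 0
        for _ in range(trials):
            perceived = random.randint(0, 3)   # uninformed guess
            if perceived > actual:
                over += 1
            elif perceived < actual:
                under += 1
        return over, under

    for score in (1, 2):
        over, under = over_under(score)
        print(f"actual={score}: over={over}, under={under}, "
              f"under/over={under / over:.2f}")
    # 2-scorers underestimate ~2x as often as they overestimate, and
    # 1-scorers the reverse, purely from the bounds of the score.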
Unless I missed something, this article doesn't explain WHY random data can result in a Dunning-Kruger effect. The relationship between the "actual" and "perceived" score is a product of bounding the scores to 0-100.
When you generate a random "actual" score near the top, the random "perceived" score has a higher chance of being below the "actual" one, because the numerical range below it is larger than the range above it, and vice versa. E.g. a "test subject" with an actual score of 80% has a (uniform random) 20% chance of overestimating their ability and an 80% chance of underestimating it. For an actual score of 20%, they have an 80% chance of overestimating.
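This is easy to see with a small simulation (my own sketch, not the article's code): draw "actual" and "perceived" scores as independent uniform values on 0-100 and the classic Dunning-Kruger plot appears anyway, purely because of the bounds.

    # Independent uniform "actual" and "perceived" scores on 0..100;
    # any apparent Dunning-Kruger pattern comes from the bounds alone.
    import random
    from statistics import mean

    n = 100_000
    subjects = [(random.uniform(0, 100), random.uniform(0, 100))
                for _ in range(n)]

    # Group by quartile of the actual score, as the classic plot does.
    for q in range(4):
        low, high = 25 * q, 25 * (q + 1)
        group = [(a, p) for a, p in subjects if low <= a < high]
        print(f"quartile {q + 1}: "
              f"actual ~{mean(a for a, _ in group):5.1f}, "
              f"perceived ~{mean(p for _, p in group):5.1f}")
    # Bottom quartile: actual ~12.5 but perceived ~50 (overestimates).
    # Top quartile: actual ~87.5 but perceived ~50 (underestimates).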
A person with an actual score of 80% will probably have enough confidence in their abilities, due to experience, that they will tend not to rate themselves low. Imagine a graduate student being asked how highly they would rank: they would not rank themselves as low as they might have as a sophomore. They would probably rank within 20% of their actual score, which is what the final graph in the article shows; professors have enough experience to self-assess better than less experienced subjects can.
As explained in the article, the reason is autocorrelation. Basically, the y axis is correlated with the x axis because the y axis is actually x plus random noise. The Dunning-Kruger graph is then a transformation of that data, and still subject to the autocorrelation.
When I called the number, I got the expected intercept message: "We're sorry, your call can not be completed as dialed..."
It gets even better... If you search for the two phone numbers on that page together, you'll find them on a whole bunch of sites, all presumably fake businesses:
It gets even better. On the front page of Taylor Wilson Smith it says
> Davis Robbins is a leading independent international law
When they were making their fake Taylor Wilson Smith site, someone apparently had a copy-paste error and included some text from their fake Davis Robbins site [1].
The fake Taylor Wilson Smith firm and the fake Mason Donald King firm both say they are at One Penn Plaza, New York, NY 10119. It is easy to find the tenant list for that building, and there are, of course, no tenants with either of those names.
Another thing they botched when making up these firms is that none of the fake attorneys at Taylor Wilson Smith are named Taylor, Wilson, or Smith. Similar for the fake attorneys at Mason Donald King.
Davis Robbins, which has the same fake phone numbers as the other two, is at least at a different address, 12 Fremont Ave, Staten Island, NY 10306.
That's not even an office. It's a single-family house in a residential neighborhood.
Like the other two fake firms, none of their fake attorneys match the names of the firm.
Also, how many real law firms specialize both in copyright litigation and divorce? Yet the TWS, MDK and DR firms all do - and they just happen to have exactly the same list of six Practice Areas. The three sites were all hastily cloned from the same template. Not very convincing at all.
It really doesn't matter how obvious it is if you follow the easiest rule of not getting scammed:
- If anyone initiates contact with you, don't trust any claims they make about their identity.
If you only trust real law firms, verify that independently with whatever authority determines which law firms are real. People need to stop using "can make a professional-looking website" as a proxy for "not a scammer".
What characteristics signal this to you? I took a glance at the lawyers' photos and can't easily determine that they're AI generated. I probably wouldn't give it a second thought if I didn't know ahead of time that they were generated.
It's the typical GAN face layout, with a blurry background, eyes centered, and cropped to the face. It's certainly possible those could be real people, but in my experience law firms usually have upper-body shots of the lawyers with their arms folded, or standing together as a team or with a client.
I wouldn't catch these at first glance, but the older gentleman specifically stands out to me with the
1. tuft of hair above the right eyebrow
2. teeth far offset from center
3. soap-bubble colored noise around the hair features
These aren't unusual on their own (except #3 maybe) but all together they make the photo seem fake.
I'm not great at this but in general
- Eyes exactly centered in the middle of the photo
- Earlobes/ears are different, e.g. attached vs unattached lobe on either side
- Boundaries of hair are confused/fuzzy
In each scam group, one member of the team specialises in something, making websites for instance. Others are good at phishing, or at talking like a call center worker, and the list goes on. The info on how to do this is sold on the dark web. So those scammers likely didn't even build these fake websites; they bought the templates.