I think one of the biggest challenges for a team lead is understanding the team's priorities, followed by identifying and acting on the leading indicators of success towards those priorities.
By understanding priorities I mean: the tech lead has to be in sync with management (of the team and often other leaders of the org) about what needs to get done and what can be cut if there isn't enough bandwidth. Weak tech leads in my experience don't have a sufficient grasp on changing priorities, which results in the team working on things that don't get rewarded properly / don't pay off and/or loading up the team with work that could have been deferred. Some of this is the manager's job, but often it falls to the tech lead to estimate the true technical 'size' of what is being asked.
By acting on leading indicators of success, I mean: the tech lead will ideally not be doing the majority of execution on a well-staffed team. They should be doing some execution work to ensure the codebase is sufficiently easy to work in etc, but most importantly they need to know how to figure out whether or not something is on track without sinking too much of their time to do so. Setting up milestones and some target date helps with this, but it's often uncomfortable to do that with folks that were recently your peers (it still needs to be done).
I don't have books or other resources, but this has been my experience as I transitioned into similar roles. I also think my experience may skew more towards a 'manager-tech-lead' than a pure tech lead, so take that with a grain of salt. Good luck!
This is an excellent article! As a team leader, your most critical responsibility is to define the team's direction clearly. Only by creating substantial value for the company will your team become indispensable, and this value isn't necessarily tied directly to revenue; it could be efficiency gains or other strategic advantages. Throughout this process, it's essential to continuously align with key stakeholders to prevent misalignment, and to quickly assess how initiatives impact the company's product value, which helps prioritize efforts. These are insights you won't find in books or courses; they must be earned through hands-on experience.
I concur with this nice write-up. Your job now is to get leadership's priorities done: identify what will achieve that goal and steer the team towards it.
My advice would be to establish a good relationship with your stakeholders and understand what they want from the team. You are now the go-to person in the company representing your team, so you should always be up to date with the work your team is doing.
Stakeholders will give you a new point of view of where your team sits in the environment. Use this POV to reflect on the usefulness of the work done by your team.
Also, I don't believe a team lead should overprotect their team, as it blurs one's view and can burn some bridges. Your team can definitely fuck up, and you should tell them when they do.
Hm, I’m not sure what you saw in the post before my edits, but I think this answers “did AI motivate / help discover the breakthroughs we saw 20 years ago?” which I definitely agree would be a “no”.
Either way, before and after my edits the intent was to identify areas in which distributed systems researchers moved their focus to support areas such as (but not exclusively) AI.
The question comes from me supposing that “pure” distributed systems research has slowed.
This is the sort of trivialization attitude that I’ve come to associate with people who only care about the “big picture”. It is really irksome if you work on the lower-level stuff that (in some sense) makes the big picture possible.
I think there is a point here that user-facing innovation stagnated and OpenAI helped break that, but it’s wild to me that there is no acknowledgement at all of the giants whose shoulders they stand on. Although I guess that’s what he meant about the arrogance…
I would be curious to understand the "How to Prompt" section more. As someone who does not interact with LLMs regularly I have no idea why this looks like a templating language.
Would anyone be able to explain what's going on in that section or point to resources that explain what the goal is / why this looks so programmatic?
It looks programmatic because this is the simplest way to describe the parameters and sentence structures the authors have found to work well. I find it's best to remember that LLMs are inference engines: they are storytelling machines. LLMs that work via instruction are building a probable narrative, not responding to their instructions like orders, but like a writer being asked to continue a script.
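To make that concrete, here is a minimal, hypothetical sketch of why prompts end up looking like a templating language: they are usually just string templates with fixed framing text and slots for variables (all names here are made up for illustration, not from the article's template):

```python
# Hypothetical illustration: a "programmatic" prompt is typically just
# string templating -- fixed framing text plus slots for variables.
# The rigid structure narrows the space of plausible continuations,
# nudging the model toward the narrative the author wants.
def build_prompt(role, task, examples):
    lines = [f"You are a {role}.", f"Task: {task}", "Examples:"]
    for inp, out in examples:
        lines.append(f"- input: {inp} -> output: {out}")
    lines.append("Now respond in the same format.")
    return "\n".join(lines)

prompt = build_prompt(
    role="helpful code reviewer",
    task="summarize the diff in one sentence",
    examples=[("fix typo in README", "Docs-only change.")],
)
print(prompt)
```

The fixed scaffolding ("You are a...", "Examples:", "respond in the same format") is the part that reads like a language; only the slot values change between calls.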
Basically it's jQuery mobile version of reddit, and it's super fast compared to whatever framework they are using for the regular mobile site.
I just want to read posts and comments and reply to them, nothing more: no nag screens telling me to download an app, no loading screens or whatever. Why is the i.reddit version so fast and the regular one so slow?
This was the main reason I moved my money out of a bank account early during the rate hikes.
I was waiting around for a better rate, but even when T-bills were being offered at >2% annually, banks were offering less than a percent. Even competitive savings accounts seemed sluggish. Money market funds were an easy way to get similar rates to T-bills without actually buying them myself or waiting for banks to get the message.
Past losses aside, the press release says that there are about $180B in deposits with the bank holding about $210B in assets. Assuming the FDIC liquidates and restructures the bank, I don’t see why deposits could not be made whole.
If there were fewer assets than deposits, then yes, the 250k+ accounts are probably out of luck.
The "assets" are actually held-to-maturity securities (bonds) that are yielding less than the risk-free rate. Who would want to buy a bond that yields 2% when you can buy Treasuries that yield 4%? So while they might have $210B in paper assets, there's no chance they can unload them without taking a loss, putting the bank upside down.
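A rough sketch of the mechanism, with hypothetical numbers (not SVB's actual book): a bond's market price is the present value of its cash flows discounted at today's rate, so a 2% coupon bond marked at face value trades well below face once the market demands 4%.

```python
# Hypothetical example: price of a bond paying a 2% coupon when the
# market now demands a 4% yield. Price = sum of discounted cash flows.
def bond_price(face, coupon_rate, market_rate, years):
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

# A $100 face-value bond with 10 years left is worth about $84 at a 4%
# yield -- roughly a 16% haircut if sold today instead of held to maturity.
print(round(bond_price(100, 0.02, 0.04, 10), 2))  # -> 83.78
```

That gap between paper value and sale value is what "upside down" means here: the assets are only worth $210B if nobody is forced to sell them early.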
> Who would want to buy a bond that yields 2% when you can buy Treasuries that yield 4%?
Whatever bank/organization that wants to have SVB's customers, probably. If an even bigger bank comes in, one which can take on those lukewarm assets for a decade without risk, then they can immediately position themselves as the "new SVB" and get a bunch of VCs and startups as customers. I assume that they could stand to profit some from such an arrangement, but I'm not a banker, so maybe not?
And restructuring tends not to stay "gov owned": the government assumes ownership to stabilize the market, then tries to sell off the business to another business. Often there's some incentive to assume a massive amount of customers and assets. The gov may even take on the intermediate loss (the FDIC is an insurance agency, after all).
Yeah, I can see why in general the government wouldn't want to hold on to assets, but bonds are kind of a special class of asset in that they do eventually mature and will naturally just be something they don't need to manage (within a relatively short time period too). If you expect that the sell off could take years to complete, some of those bonds will be halfway to maturity by the time they're sold.
If I'm the FDIC and I have the opportunity to return 100% of the funds to depositors at the cost of just holding on to a bond for a few more years than I otherwise would, that seems like a tradeoff I'd make to stabilize a lot of companies. (I'm of course biased here)
Will those assets still be worth $210B as the days tick by? I'm not a macro financial analyst, but I have to imagine trying to liquidate $210B of bonds, stocks, etc. will cause at least some of that value to fall – that's a big number.
Once the FDIC kicks in they can sell off to a different bank which can absorb them without touching the open market. Alternatively the FDIC can guarantee the bank for the duration necessary to sell assets slowly. They could likely sell the bank as a whole to another bank if assets>liabilities without too much disruption.
If someone well capitalized buys the bank, then they don't need to liquidate. The bonds aren't worthless, they just trade much lower now that interest rates have risen; however, if you can wait until they mature, you will get your money plus interest.
We have alerts set up that expect metrics for things like "orders placed" to always be happening at expected rates.
When Datadog has the very rare outage that breaks ingestion, all of our alerts would normally go off because we aren't seeing the expected volume of "orders placed", open up StatusPage incidents for us and our customers, call the pagers, and get folks working.
But instead they automatically stop any false alerts that would normally alert here because of their outage. Saves me a lot of headaches.
Stuff like this is why I am happy paying the Datadog bills. Even their outages are good.
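The logic being described can be sketched as follows. This is a hypothetical illustration of the decision, not Datadog's implementation; the names (`orders_per_minute`, `ingestion_healthy`, `should_page`) are made up:

```python
# Hypothetical sketch of an "expected volume" alert with a provider-health
# check that auto-mutes during an ingestion outage.
def should_page(orders_per_minute, expected_minimum, ingestion_healthy):
    if not ingestion_healthy:
        # Provider outage: the *data* is missing, not the orders.
        # Muting here avoids a false "orders stopped" page at 2am.
        return False
    return orders_per_minute < expected_minimum

assert should_page(0, 50, ingestion_healthy=True)        # real drop: page
assert not should_page(0, 50, ingestion_healthy=False)   # ingestion down: mute
assert not should_page(120, 50, ingestion_healthy=True)  # normal volume: quiet
```

The key design choice is that the provider's own health signal gates the missing-data condition, so an ingestion outage is reported once (via the status page) instead of fanning out through every monitor.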
This is convenient behavior up until you actually have an incident that coincides with theirs, in which case it becomes catastrophic because you had no idea that outside vigilance was required on account of their ingestion downtime. Not sure why you would laud this. Is it possible to opt out?
In your scenario you would have no logs etc. until the DD incident was resolved.
Opting out would just mean all your missing data alerts fire every time Datadog has an incident and you would then check, see that everything is missing, and then identify the cause as the Datadog incident.
It's much better to have them handle it and auto-mute the impacted monitors than to communicate to my customers every time about false alerts saying all our services are down.
> Opting out would just mean all your missing data alerts fire every time Datadog has an incident and you would then check, see that everything is missing, and then identify the cause as the Datadog incident.
You are missing the last step, which is that, knowing alerts are down, you can actively monitor using other tools/reporting for the duration of their incident.
And why would you have no logs? Even assuming you ingest logs through Datadog (they monitor much more than just logs, and not everyone uses all facets of their offering), you would presumably have some way to access them more directly (even tailing output directly if necessary).
And lastly, why would you communicate to your customers without any idea of the scope or cause of the issue? It would likely be clear very quickly that Datadog was having issues when you see that all your metrics are suddenly discontinued without other ill effect.
>knowing alerts are down, you can actively monitor using other tools/reporting for the duration of their incident.
If you just want notifications for when Datadog is down, their StatusPage does a fine job of clearly communicating incidents.
I wouldn't want to rely on a "when multiple of our 'missing business metric' monitors alert, check and see if Datadog is down" step in a runbook. I don't like false alerts. I don't like paging folks about false alerts. Waking up an oncall dev at 2am saying all of production is down when it is just Datadog is bad for morale. Alert fatigue is a real and measurable issue with consequences. Avoiding false alerts is good. If the notification says "all of production is down" and that isn't the case, there is impact for that. I'd much prefer having a StatusPage alert at a lower severity and communication level say "Datadog ingestion is down".
Instead, use their StatusPage notifications and then execute your plan from that notification, not all of your alerts firing.
>And why would you have no logs?
I mean Datadog logs/metrics etc. Currently, we are missing everything from them. We can still ssh into things etc, they aren't gone, but from the Datadog monitor's view in this scenario, they stopped seeing logs/metrics and would alert if Datadog didn't automatically mute them.
>why would you communicate to your customers without any idea of the scope or cause of the issue?
We prioritize Time To Communicate as a metric. When we notice an issue in production, we want customers to find out from us that we are investigating, instead of troubleshooting, encountering the issue themselves, getting mad, and clogging up our support resources. Flaky alerts don't work at all for us here.
> We were a victim of a teleconference or Zoom hijacking and we are trying to understand what we need to do going forward to prevent this from ever happening again
I don't know if there are more details elsewhere, but I feel like a solution here is to not host the meeting in such a way that allows others to share anything (or at least only allow authorized users to join)? Although I would think that a 'broadcast' of the meeting followed by text-based follow-ups from people not physically present would be an even better system.
Maybe there are more requirements that I don't know about.
I think what likely happened is that the zoom call was started by a non-technical person who didn’t realize how to properly configure the permissions on the meeting. So once the meeting got started it turned out anyone had permission to share their screen and hijack the call with whatever they wanted to show.
I use Zoom for a couple groups I'm part of. It's actually hard to set screen sharing on by default. To the point where if I use the link I sent out to join, but not as the meeting organizer, I have to leave and come back before anyone can share. My work Zoom link gets around this restriction.
The fact that Zoom fullscreens any share makes this worse.
At this point most organizations should have someone adept enough to boot misbehaving participants. It's a little tricky.
Hold the meetings in person. Zoom has been a disaster for collaboration. If people try to bomb your in person meetings you can have the police arrest them for trespass.