This just reads as a random mishmash of missteps the company has taken over its 33 years of existence (remember, it's older than Amazon or Google) rather than a proper critique of when it arguably lost its shine in the public's eyes. The company still generates a ton of money, continues to set records for day 1 sales of its games, and owns extremely valuable IP in the gaming industry, so you can't really say it hasn't adapted well to the current industry norms. At times, it practically sets them. It has certainly missed a lot of opportunities, such as turning BattleNet into a public digital storefront before Steam, or capitalizing on the MoBA genre that spawned from their own games before competitors did, but I doubt that they would have had as much success even if they did because their approach would have been different.
Jason Schreier's recent book covers some of the game cancellations. The Warcraft adventure game was cancelled after they flew out one of the best designers in the genre for a week to try to make it work, and make it fun, and couldn't. It was a game that was outsourced to a different company, and they didn't feel like it was up to their quality standards to ship. Shutting down Blizzard North came about as a consequence of the distance between them and HQ, leading to a different studio culture that became difficult to manage, and the uncontested resignation of Blizzard North's executive team when they tried to make demands from Blizzard's owners, Vivendi.
Polygon [1] covered the Starcraft: Ghost game. Long story short, it got canned because it was in development hell for too long. It was originally being developed by a studio in the Bay Area, and apparently there wasn't a dedicated Blizzard producer assigned to the game for the longest time, while the idea of what it should be kept changing as new games came out and HQ wanted them to copy those ideas. At some point, Blizzard shifted development to a different studio just miles away from them because they wanted multiplayer, but the same issues persisted. And then they released WoW, which consumed all of their attention. With the release of the gen 7 consoles around the corner, requiring further investment, they made the sensible choice to shelve it so they could focus their time and money on their new cash-printing machine instead.
Experimentation is important for finding the fun, and cancelling what isn't working is a required part of the process. And while, yes, there's a ton of games in the Blizzard graveyard, they're no exception. Valve has a list of cancelled games that's probably just as long. And they're all the better for it. Titan died in favor of Overwatch, Nomad died in favor of World of Warcraft.
>This just reads as a random mishmash of missteps the company has taken over its 33 years of existence (remember, it's older than Amazon or Google) rather than a proper critique of when it arguably lost its shine in the public's eyes. The company still generates a ton of money, continues to set records for day 1 sales of its games, and owns extremely valuable IP in the gaming industry, so you can't really say it hasn't adapted well to the current industry norms.
At the expense of being treated almost as badly as people treat Activision.
>It has certainly missed a lot of opportunities, such as turning BattleNet into a public digital storefront before Steam, or capitalizing on the MoBA genre that spawned from their own games before competitors did, but I doubt that they would have had as much success even if they did because their approach would have been different.
Afaik they never tried to compete with Valve by building a Steam-alternative shop; that only came about much later, with Activision releasing their games on the Battle.net platform.
>Jason Schreier's recent book covers some of the game cancellations. The Warcraft adventure game was cancelled after they flew out one of the best designers in the genre for a week to try to make it work, and make it fun, and couldn't. It was a game that was outsourced to a different company, and they didn't feel like it was up to their quality standards to ship. Shutting down Blizzard North came about as a consequence of the distance between them and HQ, leading to a different studio culture that became difficult to manage, and the uncontested resignation of Blizzard North's executive team when they tried to make demands from Blizzard's owners, Vivendi.
Outsourcing those games was the issue, then; they should've either done them in-house or worked with a better-known company, since afaik they weren't done by LucasArts or Sierra but by the same studio that made the Zelda games for the Philips CD-i.
Same thing goes for SC: Ghost, and as you point out it was rife with mistakes that screwed it all up.
>Experimentation is important for finding the fun, and cancelling what isn't working is a required part of the process. And while, yes, there's a ton of games in the Blizzard graveyard, they're no exception. Valve has a list of cancelled games that's probably just as long. And they're all the better for it. Titan died in favor of Overwatch, Nomad died in favor of World of Warcraft.
I agree to an extent: you can experiment as much as you want, but if it keeps happening without much change, there's probably something systemically wrong within the company, which was the case with Blizzard for quite some time.
I don't think the license change is unwarranted. At a previous employer we used Terraform but the pricing on the cloud/enterprise offerings was prohibitive enough that we instead had a dev create simple wrapper scripts in our CI/CD system to run the deploy jobs. Significantly cheaper, but I spent years pushing for us to eventually move to the paid offerings as the developer experience was significantly lacking (and to support Hashicorp), up until I left the company. I think they're still using those wrappers today despite how awful they were to use.
There was definitely room for improvement around using Terraform to do actual deployments. From better UX around PRs -- showing not only the commit diff but also the output of a "tf plan" to see what it would actually do -- to running the deployments on isolated build machines that could hold the sensitive cloud API keys and provide a deployment audit trail, these were all features that teams absolutely needed to use Terraform sanely.
As a solo developer you don't really need those features, but on a team you definitely do, and you're almost certainly willing to pay for them. Hashicorp recognized that need and created the cloud/enterprise offerings to provide it.
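The plan-on-PR part of that workflow can be approximated with a very small wrapper, which is roughly what our CI scripts did; a minimal sketch in Python (the command is parameterized only so the function is easy to exercise without a real Terraform project):

```python
import subprocess

def plan_comment(cmd=("terraform", "plan", "-no-color")):
    """Run `terraform plan` (or any command, for testing) and wrap its
    output in a collapsible Markdown block, ready to post as a PR comment."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    fence = "`" * 3  # built here so this snippet doesn't embed a literal fence
    body = result.stdout or result.stderr
    return (f"<details><summary>terraform plan</summary>\n\n"
            f"{fence}\n{body}{fence}\n</details>")
```

Posting the returned string to the PR via your forge's comment API is the part the paid offerings handled for you, along with keeping the credentials off developer machines.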
At some point the thought even crossed my mind of creating an open-source tool that could provide a nice enough web interface for dealing with Terraform on teams, building on what we had and providing the features I listed above, but the main reason I didn't was that it would be biting the hand that feeds. Such a tool would take away people's incentive to use Hashicorp's paid offerings and ultimately reduce their investment in Terraform and their other fantastic tools, and in my opinion, it would be disrespecting the tremendous work Hashicorp had done up to that point. I've been a user of their stuff since they only had Vagrant, and of course have loved seeing them succeed.
It seems others, however, had different opinions and saw a business opportunity thanks to the permissive licensing and the high costs of Hashicorp's paid offerings. Plenty of money to be made from making it easy to use TF in teams, especially when you're not obligated to contribute back or maintain the underlying software [1]. Any time I saw a "Launch/Show HN" post from a company that was offering such TF wrapper web interfaces, I kept being surprised that Hashicorp hadn't yet clamped down on these lower-cost offerings of their paid services. It was only a matter of time.
[1]: I realize this reads as overly harsh to some of these companies, especially as some of them are in here replying and pledging to give back, so let me try to explain my reasoning here. When I use a product, I like it when the source is available for me to learn from and understand how it works [2], and to contribute back to for needed features or bugfixes [3].
When a company makes a product open-source, that's great! But if that product is the core of that company's business model [4], and another company starts competing with that company using the same open-source product, then I see a problem down the line. While you can make the argument that the competition is good and motivates the two companies to compete on the value they bring to their customers, which is a net-benefit to the open-source ecosystem as a whole as the open-source product is improved, it eventually turns into a race to the bottom. Pricing will be used as a core differentiator, reducing the overall R&D spending on the open-source product because ultimately the two companies have to maintain non-R&D staff like sales, finance, and support. If the Total Addressable Market is fixed (obviously not, but work with me), then that's two or more companies with the same fixed non-R&D costs diverting revenue that could be spent instead on improving the open-source product. Sure, the reality is that a lot of that revenue isn't going back to the open-source product, as a lot of people are complaining about in the comments, but that diversion is probably going to happen anyway whether there's 1 company or 20, so I'd accept it as a cost of doing business.
If instead the competition were on providing a better but different open-source product in the same space (e.g. Pulumi), rather than working off the same base, that would be a different story. But if developers keep seeing businesses take open-source projects and directly compete with their creators, then I think we're going to see a net harm to the open-source community, as it creates a sort of chilling effect that'll demotivate creators from going the open-source route and push them to find other ways to sustain their efforts. I think licenses such as the BSL and SSPL are valid enough compromises, considering that even mentioning the AGPL inside of a lot of companies seems to be like someone saying Voldemort's name. We can't rely on large corporations sponsoring open-source projects, either with money or developer time, if we want them to succeed.
We grant inventors 20 years of exclusive use of an invention, provided they explain how to reproduce it by publishing a patent. What's the difference between that and the BSL? I see a lot of complaints about bait-and-switches, but I don't really see the issue. If you contributed to the project under the old license, it's still available under the old license! You just don't get any of the new changes starting from the license change. If you decided to use Terraform in a non-competing way [5] solely because of the old license, and are concerned about the new one, then you have to recognize that Hashicorp is now another addition to a long line of "open-core" companies trying to deal with the reality that companies will make money any way they legally can. This is where the industry is currently headed, and whatever replacement you find will probably be next.
If you believe differently, then make an open-source offering, and don't just make a public statement saying it'll be open-source forever. Public statements are great and all, up until there's doubts about meeting payroll. Find a way to make the statement legally binding and then we're talking. Which is I guess why there's so much consternation, since the way to do it is through the license, but the OSI doesn't recognize any of these other licenses as "open-source" and the AGPL is a non-starter at most companies.
[2]: Reading the source code for libraries I use has been incredibly valuable in my understanding of how to use the libraries properly, much better than any documentation could. And of course, makes me a better programmer in the process.
[3]: At one point, Terraform was missing a feature that I badly needed. With the source available, I could easily get a new version of it running locally with that feature to unblock me, and then everyone benefited when I contributed it back to the project. It's also been invaluable having these locally modifiable builds to understand the quirks of products from cloud vendors, and to work around them. Ever had multiple deployment pipelines fail because Azure decided to one day change the format of the timestamps they returned in API calls, without publishing a new API version? I have.
[4]: As opposed to supplementing their business model. Google open-sourcing K8s was great for them because it drove adoption of their cloud VMs. Their cloud business makes money off the VMs, not GKE, so sponsoring K8s is essentially a marketing expense. But for Hashicorp, their core business model is paid offerings of their products.
[5]: Yes, I get that the license is currently unclear, for all their products. But let's simply say that you're not trying to directly sell a wrapper around running Terraform.
Is this a new requirement? I remember for Kubecon Seattle 2018 using an iOS app (named KubeCon+CNC) that was nothing more than a schedule viewer for the event. It didn't even use native views and given how narrow the audience for the app was -- attendees for one of two 3-day conferences -- I was surprised at the time that it even existed as an app and had made it past the app store review.
I also know of at least one app, a sanitary self-certification app that allows your entry into a European country by generating a QR code to display, that is also basically a shell around a simple web app with a form. It absolutely should be an app given how convenient it makes accessing the entry requirements and generated QR code, but its existence does make these app store requirements seem absolutely arbitrary.
For what it's worth, I hope you do end up writing more posts in the future. It's easy to overlook or forget the pains and lows of projects in hindsight, especially for particularly successful ones like the ones you've worked on. Your blog posts have been some of my favorites because they don't gloss over the sacrifices required for those successes, a reality that Blizzard's historically secretive culture tended to hide. Your work more than speaks for itself, and you've definitely earned the right to be boastful of it.
I was actually excited to see your blog domain pop up on HN today, only to find out it was for an old post. You have a ton of valuable insight and knowledge into games and project management, and I hope that commenters like the one you replied to don't put you off from sharing that with the world.
> But something happened after the success of WoW the company grew quickly and with that came some bad people and after the merger, Bobby Kotick could exert his influence.
I don't think this was the case, and that rather WoW's success forced the company to become more professional but the sexual harassment culture was so ingrained that the company couldn't shake it off. The name thrown around the most due to the lawsuit was Alex Afrasiabi, who was hired in early 2004 to work on WoW (which released November 23, 2004). You can also take a look at WoW's credits [1] and see the mention of "sexy HR girls" or their internal tenth anniversary video [2] from 2001 that covers why they hired their first female employee.
A couple of weeks back the YouTube algorithm decided to show me the channel of one of the Warcraft 3 level designers, who worked for Blizzard from 1998 to 2003, and in one of his videos, made 4 years ago mind you, he's pretty clear on what the culture was like there [3], with the choice quote "if I wanted to bring a sexual harassment lawsuit against Blizzard ... I could have easily done that."
The guy also said that Rob Pardo (lead designer of Starcraft, Warcraft III, and WoW, and now founder of Bonfire Studios since his ejection from Blizzard in 2014) was particularly toxic. And it seems he's not the only one with the same sentiment [4]. The other Rob from that tweet I'm guessing was Robert Bridenbecker [5], who was with the company since 1995. These were people who had to have been working closely with Morhaime due to the nature of their positions, so keeping them around for so long despite the outstanding HR complaints doesn't bode well.
Is it worth following and supporting the new gaming ventures if this is the culture those veterans fostered and had grown accustomed to?
I've seen it. I have also seen a lot of disgruntled employees and it's not obvious to me that I should take anyone's word over senior management.
With the kind of success that WoW had, it goes to your head. Some people get corrupted by it. It can bring the worst out of some people. That probably happened.
> This is the money they could have used to have a buffer to deal with these situations or to improve their systems.
Matt Levine covered this last year [1]. The basic gist of it was that the CEO is focused on the shareholders, and the best use of the money was on stock buybacks. Spending money on improving customer or labor relationships wouldn't have helped during the start of the pandemic when all the airlines were stuck in the same boat unable to fly planes, and the cash used by e.g. American Airlines for buybacks in the past 7 years to increase the stock value 113% would have only bought them 4 months of operating expenses. The most long-term value for shareholders was created through the buybacks, and the government being willing to prop the businesses up during downturns reduces the risk exposure from this strategy.
Thanks, I like Levine and remember reading that. He basically argues that the airlines' financial strategy was optimal for shareholders, given covid and guaranteed government support. But firstly, he only considers the two uses of money proposed in the NYT: improving customer service or reducing the debt burden. Secondly, his whole argument is predicated on airlines being bailed out by the government -- which is true, but which I believe shouldn't happen. I personally believe a lot of value is being destroyed by lack of long-term investment and short-term incentives, leading to problems such as these. And finally, even Levine admits that buybacks might be suboptimal for other stakeholders (e.g. employees, clients).
Airlines with government backing end up with the only rational choice being to take more risk.
I'm reading a book right now called "The Power of Nothing to Lose" which explores the primary and second order effects of individuals and companies being put into situations where they literally have nothing to lose. It's been an interesting read so far - recommended.
This isn't specific to the M1. I had my MBP 2016 13" die in the same exact way about two years ago. Same failure mode where it just refused to boot no matter what I did, though it was possible to sort of turn it on if you left it alone for a couple of days. Was able to use that to pull the data off the machine and create a Time Machine backup, but it turns out that Time Machine can lie to you about having completed a backup so I would have lost all of my data were it not for the manual backup I performed.
What was frustrating to me was that I knew my model had a diagnostics port that allowed direct access to the SSD (see https://9to5mac.com/2016/11/24/apple-special-cdm-tool-macboo...) to pull the data off of it, but the service wasn't offered when I went to get my machine repaired. I was forced to accept the data loss from having the logic board replaced.
I later had bluetooth issues on the same laptop, and that required another logic board replacement. Which, surprise, came with data loss once again. Also had a lot of fun learning during that repair that Apple changed the file extension that backups are stored with, and Time Machine can't recognize the new one as a restore source for some reason unless you rename the folder to use the old one. Wasted quite a few hours on that, and Apple's phone support had no idea why Time Machine wasn't seeing it.
Anyway, moral of the story is, backup often, and have more than one backup method, unless you like losing your data.
> Anyway, moral of the story is, backup often, and have more than one backup method, unless you like losing your data.
And I would recommend that no one use Time Machine for backups. I have several anecdotes of my own where TM backups became silently corrupted or a seemingly good backup refused to restore, and you can find many more examples in the Apple forums. An unreliable backup isn’t a backup at all.
I’ve come to the conclusion that Time Machine itself is fundamentally broken and wouldn’t trust it with any of my data. If I went back to using a Mac I’d find some other backup application.
On the other hand, all of my experiences were not recent; maybe Time Machine is fine now. Or maybe the corruption is random enough that you’re safe with 4 copies. In any case, I’d check on your backups regularly to make sure they’re still good.
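Checking a backup regularly doesn't need the backup tool's cooperation: comparing checksums between the source and the copy catches silent corruption like the kind described above. A minimal sketch in Python (directory paths are whatever your setup uses):

```python
import hashlib
from pathlib import Path

def checksum_mismatches(source_dir, backup_dir):
    """Compare SHA-256 checksums of every file under source_dir against
    the file at the same relative path in backup_dir. Returns the sorted
    list of relative paths that are missing or differ in the backup."""
    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Hash in 64 KiB chunks so large files don't load into memory.
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    source, backup = Path(source_dir), Path(backup_dir)
    bad = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source)
        dst = backup / rel
        if not dst.is_file() or digest(src) != digest(dst):
            bad.append(str(rel))
    return sorted(bad)
```

Run it against the most recent snapshot after each backup; an empty list means every source file has a byte-identical copy.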
Your repurposed analogy isn't quite right. Google is providing a service by running around and grabbing those samples. Value is provided not only to you, the searcher, by receiving an answer to a question you posed, but to the coffee shops as well by providing them with a potential customer. Is Google not allowed to profit off of providing this value for the work they're doing?
A coffee-shop owner could argue that Google is providing enough coffee that it's disincentivizing someone like you from making a purchase, but the issue here is that Google provides tools for the coffee-shop owners to opt out of giving those samples to potential customers. Instead of using them, the owners want to force by law that Google pays them for the samples, because they still want Google to do all that leg-work of finding new customers for them. This could make it cost-prohibitive to run the service, especially as the legislation can be interpreted broadly (e.g. Google says that as written it'd be difficult for an algorithm to distinguish between news and non-news content).
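For concreteness, those opt-out tools are just standard crawler directives; a sketch (the path is illustrative) of what a publisher can do to block crawling entirely, or to stay in the index while showing no snippet:

```text
# robots.txt -- block Google from crawling these pages at all (path illustrative)
User-agent: Googlebot
Disallow: /articles/

<!-- Or, per page in the HTML head: stay indexed but show no text snippet -->
<meta name="robots" content="max-snippet:0">
```

Both mechanisms predate the snippet-payment legislation, which is why the "they could just opt out" argument keeps coming up.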
I think a better analogy would be around restaurant delivery drivers. Pretend for this analogy that restaurants have a captive supply of delivery drivers that can't go off and get another job, because all the drivers know is how to drive restaurant deliveries for a living. Restaurant owners are incentivized to use the delivery drivers because it allows them to get more customers. What if the owners started asking for a law to be passed that made delivery drivers pay them for the privilege of providing delivery service? The drivers can still make money in the end after the tips they get anyway, right?
What's frustrating to me is that even though a lot of these frameworks and tools keep advertising themselves as making you more productive and letting you write less code in the pursuit of better client experiences -- i.e., creating an "SPA", a single-page application -- they keep adding more tools and more required boilerplate for you to write. You can argue that it's inevitable as they "mature", but then someone comes along a year later with a fix to those problems and you're forced to relearn everything to keep up. This cycle then starts over anew for the next 2-3 years as more tools and required boilerplate are added and someone gets frustrated enough to provide a new framework and tooling.
I've gone through this, in turn, as I've developed applications from plain old JS, to jQuery, to Backbone.JS/Marionette, to AngularJS, to Angular, and most recently to React, along with the similar change in tooling through Grunt, Gulp, and Webpack. Each time as the amount of boilerplate I'm forced to write begins to frustrate me, a new framework conveniently rolls around and convinces me to switch. I haven't looked at Vue.js yet, but a co-worker on a different team showed me a React/Redux project and I ran away screaming -- figuratively, not literally! -- from the boilerplate involved (if you're wondering, so far I've been pairing React with MobX but the applications haven't been particularly big). The dev experience has been better each step, but it's not been an easy ride and it's obvious that these frameworks don't represent the end of the chain.
All this time I thought this was just the required tax for providing a nice front-end experience, but recently I've been diving heavily into Blazor Server-Side and Phoenix LiveView on some personal toy projects to see what the server-side landscape is like, and it's a breath of fresh air. It's surprisingly quick and easy to write code that provides an SPA experience where changes on one connected client immediately propagate to other connected clients, and with so little code to boot. They have their pros and cons, but the dev/productivity experience has felt so much better.

Because C# is such a great language, Blazor is super easy to jump into and immediately understand what's going on, and since it's part of ASP.NET you're not going to lack in good libraries/documentation/examples/support. However, it doesn't really have good reload-on-changes functionality yet even while running under dotnet watch, as you'll have to manually reload your browser tab after changes. Phoenix LiveView on the other hand has great reload-on-changes functionality for fast iteration, but its drawback is that it has a much steeper learning curve and initial hurdle. Fortunately that's offset by the fantastic documentation it provides, but it is something to keep in mind, as it makes on-boarding people not used to Elixir/Phoenix a bit difficult.
Honestly, if the OP article resonates with you, I recommend giving one of those two server-side frameworks a spin. It's nice to temporarily escape from the craziness of the modern JS landscape, and to not have to write your models twice.
I remember watching a GDC talk about JAM (https://www.gdcvault.com/play/1018184/Network-Serialization-...), which was Blizzard's solution to network serialization for WoW back in 2004. The talk even had performance comparisons against protobuf, and it seemed like a decent alternative that worked well for them. Was this new alternative an extension of JAM, or a full rewrite? I know they have millions of players monthly, but I can't imagine saving a bit on bandwidth could have warranted a full rewrite, especially when the majority of their traffic is from their own datacenters.
[1] https://www.polygon.com/2016/7/5/11819438/starcraft-ghost-wh...