Hacker News | niemeyer's comments

> Either take my code without a contract or live with the bugs

The less simplistic view of the situation is that major maintainers (corporate or otherwise) pay the price of keeping the project up for the long run, under the defined license, in all senses including the legal one. The CLA is the way to integrate contributions without strings that remain legally attached to the author of those contributions. Ironically, it's also the way for these major maintainers to say "no strings attached, or wait until we fix the bugs" back to you.


I'm sad to see Stephane leaving, but it's lost on me how his departure to go do whatever else he wants suddenly transforms into LXD not being open.


It was always pretty clear that something would have to get adjusted the day Stephane left the company, because it's a major project for Canonical and Stephane preferred to run some of the infrastructure himself. From his own comments in the forum:

"In theory Ubuntu Discourse should be more reliable in that it’s run by Canonical IS who has a 24/7 team of people looking after services unlike this forum where I’m the one running the infrastructure and dealing with outages."

Nothing is changing about how LXD is developed in the open.


I don't know who you talked to, but I can tell the story is richer and more interesting than that. Saying that snaps have performance issues is similar to saying that containers have performance issues. They do affect performance because doing something is always more expensive than doing nothing, and snaps do something in addition to just running a bare executable on your machine. At the same time, the kind of operation that snaps perform should not have a significant impact on a modern computer to the point of making it slow or annoying, because most of the operations are relatively simple and happen at a low level, and computers are fast.

At the same time, snaps are a new packaging format, and when you change the layout of applications to include things such as restrictions or making things read-only, suddenly all kinds of things can go wrong, and some of these can cause major performance impact.

Two easy and real examples from the snap world: early on there was a bug where .pyc files would be out of date, and the filesystem was read-only. This meant that every single time the application was opened, Python would recompile the entire application and fail to write its cache files. Major performance impact. That was fixed.
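As a rough illustration of that class of problem (using a placeholder snap name, not the original buggy one), the squashfs a snap lives in is mounted read-only, so an application has nowhere inside it to persist caches like Python's bytecode:

  # 'someapp' is a placeholder for any installed snap; the behavior is the same.
  $ mount | grep '/snap/someapp'        # shows: type squashfs (ro,...)
  $ touch /snap/someapp/current/probe   # fails: Read-only file system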

Another one: fontconfig changed its cache format, and as a side effect applications running from snaps could not make use of the system cache and had to rebuild their own copy every time. Extreme performance impact. That was fixed.

And the list goes on. So the point is: snaps are not slow, because there's nothing fundamental happening there to make them slow. But snap applications can be slow, of course, potentially by orders of magnitude. These are bugs, and we fix them when we see them.


Hey thanks for the detailed answer.

In my experience everything I opened had significantly longer startup times - especially for VS Code this was a problem for me. I cannot explain why, I just experienced the symptoms, as did a few other people I talked to.

So if it is just an app problem, great - hopefully there will be a day when not every app with a UI that I try has that problem.


That's exactly it. I work at Canonical and was part of the internal conversation around this subject. We constantly walk that fine line where we want to encourage open source work and the communities around it to flourish, while at the same time we need to pay for bandwidth and for the salaries of the people working on that exact technology. The irony is that in the particular case at hand they would probably get it for free, because despite being a commercial project it's a small one, and we love to see such initiatives taking place. At the same time, we work with major industry players that are supposed to pay the bill, for their own benefit and for everybody else's too; otherwise we just go out of business, and that's no good. It took time mainly because we needed to set the exact terms without arbitrary discrimination.

We'll have a clearer form for that kind of application soon, so that we can streamline such requests, community or otherwise.


> It is a pretty nifty idea but like all things made by Canonical it is basically digitized garbage.

> At least those are not made by people with a disdain for error checking.

Every single time there's a topic about Ubuntu or Canonical you seem to go straight into offensive mode. I can tell you use Fedora, but that's not typical of Fedora users and developers, and it's no excuse.

I'm an open source developer and believer, but this kind of behavior slowly burns my soul, even more when it seems accepted. I've worked on enough projects in my life that it's absolutely certain that you use my code regularly. It's inside Go, Python, APT, and RPM by the way (hello Jeff Johnson, wherever you are), and in key libraries that you surely depend on as well. And I never heard you complaining about any of that with such anger here.

I'm also Canonical's CTO, and I was one of the key designers and developers that started snaps, and juju, and other key projects from Canonical. I usually hear such blind hate in silence, but sometimes it's just too much. I don't understand why we do that to ourselves, as a Linux community. Why is it okay to openly offend unknown people that we almost certainly depend upon? What is it that we came here to do, again?


Indeed the size calculation is incorrect. It's likely looking at the unpacked size, but snaps are never unpacked, which, ironically, is exactly the difference the article itself misses.

These are the actual sizes for these snaps:

  $ snap info vlc | grep stable:       
  stable:    3.0.4                   (555) 204MB -

  $ snap info libreoffice | grep stable:      
  stable:    6.1.2.1 (86) 501MB -

  $ snap info gimp | grep stable:
  stable:    2.10.6 (47) 192MB -
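If you want to check the packed vs. unpacked difference yourself, a rough way (using the standard snapd paths and the vlc revision shown above) is:

  # The .snap file on disk is what gets downloaded, and it stays compressed.
  $ ls -lh /var/lib/snapd/snaps/vlc_555.snap
  # The mount point exposes the uncompressed view, which is what the article
  # appears to have measured.
  $ du -sh /snap/vlc/555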


Yes, the article is indeed on the low end. It focuses mainly on the package sizes, and gets them very wrong as mentioned in other comments.


Yes, the size is wrong. It's looking at the unpacked size, but snaps are never unpacked. Here are the actual sizes for the mentioned snaps:

  $ snap info vlc | grep stable:       
  stable:    3.0.4                   (555) 204MB -

  $ snap info libreoffice | grep stable:      
  stable:    6.1.2.1 (86) 501MB -

  $ snap info gimp | grep stable:
  stable:    2.10.6 (47) 192MB -


There's a long thread with in depth discussion about this:

https://forum.snapcraft.io/t/disabling-automatic-refresh-for...

For those that understandably won't want to go through it all, the short version is that by design snaps will force the update eventually, so that a system isn't simply left behind, but since snapd came out a few years ago we've been constantly working on multiple methods to offer control over when exactly the update takes place. These are features such as:

- Fine scheduling of updates (https://forum.snapcraft.io/t/refresh-scheduling-on-specific-...)

- Disabling over metered connections (https://forum.snapcraft.io/t/snap-refresh-over-metered-conne...)

- Holding of refreshes after boot (https://forum.snapcraft.io/t/delaying-refreshes-and-registra...)

- Manual delaying of updates (can't find topic)

So, the goal is actually to offer control, but we are indeed trying to prevent systems from getting out of date for good. Maybe that's a bad idea, and if it turns out to be we can change that in the future, but we've been making an honest effort to try to fix the problems of automatic updates instead of simply giving up. Once we give up, there's no going back since the dynamics around package updates will change. We have plenty of experience around these aspects with the traditional systems.
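To make those features concrete, here is roughly what the knobs look like on a current system (exact option names may vary slightly across snapd versions):

  # Constrain refreshes to a weekly window and hold them on metered connections.
  $ sudo snap set system refresh.timer=fri,23:00-01:00
  $ sudo snap set system refresh.metered=hold
  # Check when the next refresh is scheduled to happen.
  $ snap refresh --time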


Sorry, but unless there's an option to hold back the update until I explicitly allow it, that's not being in control. This is still my computer and I decide when to update software.


Why is this being downvoted? He is exactly right.

The ability to pin a specific version is needed for a multitude of reasons: the newer version breaks something, or you only have a license for up to a certain version (think non-subscription JetBrains products), etc.

The inability to disable updates removes that packaging system from further consideration. It is a showstopper.


The comment is mistaken. I suspect it’s being downvoted not because update control isn’t appreciated, but because it’s very much there with snaps. There are several ways to disable snap updates, and they are really quite nicely balanced for modern operations.

For example, if you publish a snap that depends on another snap, say an app which uses a database, you can set things up so the database won’t update until you publish a validation certificate that your app snap version X has been validated with database snap version Y.

Updates can be deferred by anybody, and I think there is a plan for snaps themselves to be able to defer their own updates (for example, a movie player that is playing a movie at the scheduled update time).

Enterprise management systems can also control the flow of updates very nicely. For example, they can decide which snap revision is exposed as ‘stable’ or ‘beta’, which means they decide when a new revision of a snap will be considered for update by all the machines tracking those channels. They can also prevent any updates from happening on specific machines.
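On the client side, that ends up as ordinary channel tracking (a small sketch, with vlc standing in for whatever the store serves):

  # Follow a different channel; subsequent refreshes come from there.
  $ sudo snap switch --channel=beta vlc
  # Or refresh once from a channel and keep tracking it afterwards.
  $ sudo snap refresh --channel=candidate vlc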

Device manufacturers also get a layer of control, similar to snap publishers with their dependencies. So an appliance that uses snaps might see specific revisions of snaps only once those have been certified on that device.

Considering how rich the actual reality of ‘software update distribution and management’ is in practice, it’s nice to see that level of thinking built into the system. We’ve had simplistic approaches around for decades and the results in practice are poor: there are millions of vulnerable machines out there because of neglect. I’m interested to see if these mechanisms achieve a better result all round, and the simple thing you are focused on is certainly already there.


Tl;dr. Your computer isn't yours, and "We" know better than you, lousy user.

(That's what I took out of the shitty design decision and disgusting justification you wrote. I'll be damned if I have to defend myself against open source apps in some shitty all-in-1 format with fascists at the helm.)


Could you tone it down a bit?

Your opinion is one shared by many on HN regarding forced updates, but the parent comment has made a proper effort to answer your questions and you resort to name calling.

I am all for freedom of speech, but if you write it on a brick and throw it through my window you're going to have a hard time getting me on your side.

Comments like these alienate people from what you're trying to accomplish.


> Could you tone it down a bit?

So, a decorum "argument"... Every last thing I said was true.

I'm putting the Snap maintainers in the same bin as Facebook, Microsoft, and Google. They want to idiot-ify the user experience with "We know better than you" style of rules and degrade MY ownership.

For that, yes, I will show anger. Even if this is -1'ed and buried, I'm sure one or more of the maintainers saw it.


If it makes you happy, yes, I see it. But that doesn't make things better for anyone. We've had more interesting discussions around this issue where people could actually present good arguments towards more control, and some of these conversations resulted in the development of more control features, as presented earlier in this thread.

Also, it's important to realize that it's not me or Canonical that has control over the updates, so it's not me knowing better than you. The goal of this exercise is to have good tooling that would allow updates to flow between a publisher and a user with a better overall outcome.

That means, for example, that we are putting more pressure on publishers to get it right, because they will more quickly and obviously break people if they release something broken. There are actual high profile publishers that changed their processes because of that.

We are also putting more pressure on the tooling, because we need to be able to recover gracefully when the update does fail, and that's one of the reasons why we have a more polished transition and rollback mechanism than any package manager out there.
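As a small sketch of what that looks like for a user today (vlc is just a stand-in for any snap):

  # Go back to the previously installed revision if an update misbehaves.
  $ sudo snap revert vlc
  # Move forward again once a fixed revision is out.
  $ sudo snap refresh vlc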

Yes, maybe it won't work, but it's a very interesting problem and is worth solving. Then, even if we don't fully solve it, the exercise will have been worth it, because it improved all those aspects in meaningful ways.

But I hear you... you're mad at me. Point taken. :-)


Hi, I'm a poster somewhere up in the thread, not crankylinuxuser.

However, I still have the impression, from seenitall's comments and yours, that you are solving the wrong problem.

- dependencies - that's something the traditional package managers solve very well :)

- temporarily deferring updates while running: not interrupting the user is a different issue than not updating. (In my opinion, Flatpak solves this elegantly: it can tell the running application that a new update is installed, and when it is convenient to the application, it can restart itself. Until then, both versions are available.)

- not everyone uses an EMS; I'd say that no SOHO and only some SMEs do; so here you are creating a problem for power users and small businesses; notwithstanding that your competition (Flatpak) has the stable/beta/whatever channels too, without needing an EMS or other tooling;

- device manufacturers were always able to do this with traditional package management (using a metapackage that depends on specific versions);

- some of the problems above seem to stem from snap's insistence on a single source of truth, under the control of a single entity. All the other package systems avoid some of the issues by being decentralized. For example, you can have your own apt/yum/flatpak repository and control what packages get in (see the sketch after this list). AFAIK this is impossible with snap, which thus needs to implement its own solutions for a defined set of scenarios.

- the above scenarios are still missing the crucial component, that crankylinuxuser points out: the user's consent to update. Neither poster in this thread addressed this concern.
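Regarding the repository point above, a minimal sketch of what running your own repository looks like elsewhere (names and URLs are placeholders, not real repositories):

  # Flatpak: add a repository you control and install from it.
  $ flatpak remote-add --if-not-exists myrepo https://repo.example.com/myrepo.flatpakrepo
  $ flatpak install myrepo com.example.SomeApp
  # APT: a plain sources.list entry pointing at your own archive does the same job.
  $ echo 'deb https://apt.example.com/ubuntu focal main' | sudo tee /etc/apt/sources.list.d/myrepo.list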

The issue is that you cannot rely on the vendor/someone upstream to certify the solution for you, because they may be wrong. When they are wrong, it will break your system and the vendor will not be quick enough (or even willing) to un-break it.

My example: we are using a certain well-known ETL tool (I won't name it here, no point in shaming them) that has had a bug since some version where it drops characters in a certain Unicode range. Most customers do not have a problem, only those unfortunate enough to be processing data in a language that falls into that range. The vendor has a bug report in their bug tracking system, they claim to be working on it, and they semi-regularly do new releases - without the bug fixed.

Of course, nobody that needs that specific Unicode range can update. Nobody in this context means a handful of users worldwide, i.e. a small fraction of a percent of customers, so the bug fix is not exactly a high priority.

With traditional yum, it is easy enough to solve - just exclude that specific package from updating. With your competition, Flatpak, it would be easy too: just do not update that package. At installation time, users can still install the "old" version when needed.
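For example, something along these lines (the package name and app ID are made up for illustration):

  # yum/dnf: exclude the broken package from a given update run...
  $ sudo yum update --exclude=etltool
  # ...or permanently, via exclude=etltool* in /etc/yum.conf (dnf: /etc/dnf/dnf.conf).
  # Flatpak (newer releases): mask the app so 'flatpak update' skips it.
  $ flatpak mask com.example.EtlTool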

Here, vendor certification would not be enough - the application works for most users, it's just too bad when it doesn't work for you. In the absence of an EMS, there would not be much the users could do.

And that's where the issue "someone else has control over update process, not me" comes from.


> - the above scenarios are still missing the crucial component, that crankylinuxuser points out: the user's consent to update. Neither poster in this thread addressed this concern.

Indeed. Lack of control is my biggest source of anger.

I'm accustomed these days to most devices being under some sort of remote control/ownership from the "mothership". Free Software and friends have made it so we could avoid this, and retain ownership of our systems.

On Windows, reboots are a fact of life. I was in for an MRI last year that took 1.5 hours. The MRI itself was half an hour, but a mandatory update kicked in when I showed up.

I also know people who 3D print using Windows, got hit by an update reboot cycle, and lost their print.

Long story short: Windows, Macs, iPhones, and Androids are not our devices. We at best rent them. Ownership = control.

So when some open source group wants to centralize and force updates like this, it's because they are trying to fight for ownership of my devices. I've fought long and hard to free myself from the most onerous software. And now I see it popping up in what was once a bastion of freedom.

So yeah, I'm angry. And yes, I'll do what I think is right in terms of impeding this.


> So, the goal is actually to offer control, but we are indeed trying to prevent systems from getting out of date for good. Maybe that's a bad idea

You think? Someone saw all the complaints about Windows 10's forced updates and thought "we should totally do that too"? Seriously?


Wow, this is almost Windows levels of terrible.

