Hacker News | slicedbrandy's comments

It appears Microsoft Azure's content filtering policy blocks the prompt from being processed once it detects the jailbreak. However, removing the tags and leaving just the text got me through, with a successful response from GPT-4o.


I had a look into KDE Connect out of interest, but found it quite lacking in functionality on iOS. The limitations likely stem from Apple's locked-down approach to its OS and security, but they remain limitations nonetheless.

https://userbase.kde.org/KDEConnect#Missing_or_limited_featu...


Also did something very similar, but swapped out the storage layer for an embedded LevelDB.

Also supports both an HTTP and Redis API.

https://github.com/tomarrell/miniqueue


Interesting that it supports the Redis API as well!

As noted elsewhere here, it was important to me to use SQLite, since that's what I use as the storage layer already. But I've added HTTP adapters for remote clients.

I hadn't thought of using a different remote API, interesting idea. Thank you for the pointer!


Interesting. I took this to heart a couple of years ago and came up with something focused on simplicity to solve our niche issue at $fintech.

https://github.com/tomarrell/miniqueue


Now time to update interface{} usage and shift to any.

  $ gofmt -r 'interface{} -> any' -l **/*.go
Replace -l with -w to write.


Is there an actual difference between these or is it just aesthetic? (and shorter)


any is an alias of interface{}, so the difference is purely aesthetic.


well there goes my Hacktoberfest idea


Hey HN!

Out of frustration at the operational complexity of some brokers, I decided to strip back the feature set and focus on simplicity. A by-product of this simplicity is that it's damn fast*.

MiniQueue is not distributed, by design; instead, it's backed by an embedded LevelDB for persistence.

It can also be easily used as a simple companion job queue, supporting delaying messages for arbitrary periods.

Would be great to hear some feedback should you try it out. Cheers!

*Benchmarks can be found in the README


SumUp | Golang Engineers | Berlin, Germany | ONSITE | VISA | https://sumup.com

SumUp is a leading card payment hardware company, born in Europe, serving millions of merchants globally. We provide small and medium-sized merchants with the ability to easily accept card payments using our in-house-built readers and their smartphones. We're a hardware and software company, building financial tools for merchants, allowing them to focus on what they're best at.

We have a modern tech stack, running microservices on Kubernetes. Teams are fully autonomous and cross-functional. We're looking for all levels of engineers interested in writing Golang day-to-day, and helping shape the engineering landscape here at SumUp.

Stack: Go / Postgres / K8s / AWS / Jenkins / Docker

Apply: https://sumup.com/careers/positions/4301814002/

Feel free to ping me: thomas.arrell(at)sumup.com if you want to chat further about the positions!



Hey HN,

Work is coming along on this project to build a distributed SQL database from scratch, mostly as a reference for newcomers to get an idea about the inner workings.

Looking for contributors who are interested in anything from parsers, disk paging, building out a REPL, defining an IR grammar, implementing consensus and more!

Anyone interested in contributing in these areas is more than welcome!


It takes 5-10 years to write, test and productionize a database or filesystem. If you're planning to invest that kind of time and effort, some suggestions are:

* if you're writing a distributed database, a novel and valuable feature would be to consider the network partition case foremost. For example, design the database from the standpoint of a node being down for a month.

* do adequate logging so that an operator can understand what is going right or wrong

* how can terminated nodes be rebuilt efficiently and automatically?

* all configuration settings should be dynamic

Source: experienced DBA; have worked with Cassandra, InfluxDB and most SQL RDBMSs


> if you're writing a distributed database

Is this really a requirement for all distributed databases, or only decentralized ones? If I'm using a distributed database and I control all the nodes, and one of them goes down, I don't plan on spinning it back up (assuming the data is replicated across other nodes). I guess what I'm saying is: if I need 20 nodes, I'm going to make sure I always have 20 nodes running.


Yes, this applies to any distributed system.

It's a basic truth about computer networks that if you send a message to a machine and don't hear back, you don't know whether it's actually dead or just unable to communicate with you. If a machine finds itself in that situation, one option is to simply wait until the peer eventually comes back to life, or to retry enough times to eventually get through.

If it takes any other action, it has to make sure that action is "safe" (w.r.t. whatever guarantees it's trying to provide) under either scenario. That's all "partition tolerance" really is: a statement that you, as the developer, have a burden of proof to make sure you've considered all the possible failure scenarios. (If that seems like a tautology, well, that's why we usually don't bother talking about the "P" in the CAP theorem.)

As your comment alludes to, one can often simplify the problem a bit by reducing the number of possible scenarios: assume that crashed nodes never recover and are only ever replaced by new nodes. But that still doesn't make the problem go away. Unless your network (including both the hardware and the OS network stack!) is infallible, you can't reliably know whether a remote machine has crashed in the first place.

Consider the common situation where you want to provide some kind of transactional guarantees; for instance, transactions should appear to complete in causal order, and their effects should not disappear once committed. That implies that even if a node "looks" dead, it's not safe for another one to take over its role unless it's really, truly dead, or you would risk returning stale (transactionally invalid) results.


> design the database from the standpont of a node being down for a month.

It uses Raft, so this should be handled by design. If you are referring to writes to said node (not that Raft would allow it), you are getting into CAP theorem territory, and what you are suggesting is impossible (unless you don't care about consistency).


Raft is a consensus protocol. It doesn't decide what you actually do during long partitions (stop writes if quorum is lost? indicate that consistency may be compromised in some other way?), or how to handle long-lost nodes rejoining (do they rejoin no matter how long they've been gone and serve stale reads? does the cluster "forget" them after some period of time? if so, how do you synchronize the act of forgetting so you don't end up in split brain when only some healthy nodes have forgotten their long-lost brother by the time it comes back online?).

Raft is not "use this library and your distributed system Just Works". It's a low-level building block, like TLS, and nobody says "just use OpenSSL and your app is fully secure".


If it is intended to be a reference project, I suggest writing the README sections first, highlighting the main areas and then linking to the relevant code sections to ease navigation.

You may or may not be aware, but Andy Pavlo records his courses and puts them on YouTube. His latest playlist covers the main database topics with the last five or so lectures covering distributed databases: https://www.youtube.com/playlist?list=PLSE8ODhjZXjbohkNBWQs_...

edit: ^suggesting Pavlo's work since he introduces database concepts very well, so it may be worth structuring the reference architecture the same way.


How are the goals of this project different from CockroachDB's?


Thanks!

Sounds like a good idea. I’m planning to put together a series of posts on my blog going through it step by step once it’s at a good stage.

I’ll also be using it as the basis for a few learning sessions at my work, so I'll have some more condensed slides for that, which will also go up on GitHub.

