Hacker News | mmacedo_en's comments

It's mobile-first so you can input expenses and view spending limits anytime, anywhere. I usually input expenses right after paying for them.

I'm using Flutter, which also builds apps for Web, Windows and Linux using the same codebase. My plan is to make Pleke available on all platforms in the future.


Ah ok that makes sense, thanks!


I've been working on Pleke for the last 18 months and using it daily for the last 6 months.

Would love some feedback!


For twenty years I tried to build products and sell them to others, to no avail. "Marketing is the problem" - I always told myself about my failures.

I still have one product up from those days. FileSculptor is a file converter used by non-developers to automate CSV-to-XLSX conversion. When was the last time I used it myself? Excluding testing before new releases or support issues, never.

I decided that my next product should be something I really care about. Something that scratches my own itch and that I would use on a daily basis. I should be the first one to find bugs in my product, not my users.

I always kept track of my expenses, but I was less diligent with my budget. I used a spreadsheet and would update it once or twice a month. So I decided to create an app with the budget and expense tracking I wanted. I always had lots of ideas about how it should work: how it would forecast transactions automatically from a detailed budget, and how it would deal with credit card purchases, hitting the budget in the month they'd appear on the credit card statement. It would support credit card purchases with installments, something always missing in the apps I tried.

I created Pleke to be the personal finance app I always wanted, and I use it on a daily basis. For the last six months it's been the only source of truth about my finances. At my day job I work on a product used by others. I'm grateful and want it to work well because it pays my bills. But with my app, it's quite a different relationship. I'm always proud when I input data and it works as designed. When something breaks, I want to fix it ASAP. I have a text file with planned features for version 2, version 3, 4 and 5.

Pleke is the app I want to work on for the next ten years, even if others do not value it as much as I do. "Marketing is the problem" - I will tell myself, to their loss.

Be the first user of your product! A person who takes good care of their garden will work on it and admire it on a daily basis. So should we with our products.


Hey everybody,

I've been working on Pleke for the last 18 months and using it daily for the last 6 months.

It is based on cash flow, so the budget is affected when money enters or leaves the accounts. Credit card purchases affect the budget in the month they appear on the credit card bill. It even shows the credit card bill based on completed, planned or estimated transactions.

I use cash flow projection to predict the future balance of accounts, to warn about lack of funds and avoid overdraft fees.
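Pleke's actual implementation isn't shown here, but the core of a cash flow projection like the one described can be sketched in a few lines (all names and figures below are made up for illustration): walk the planned transactions in date order, keep a running balance, and flag any date where the balance would go negative.

```python
from datetime import date

def project_balances(opening_balance, planned_transactions):
    """Project the running balance from planned transactions,
    flagging dates where the account would go negative."""
    balance = opening_balance
    warnings = []
    for when, amount in sorted(planned_transactions):
        balance += amount
        if balance < 0:
            warnings.append((when, balance))
    return balance, warnings

final, shortfalls = project_balances(
    500.0,
    [
        (date(2024, 1, 5), -300.0),   # rent
        (date(2024, 1, 10), -250.0),  # car payment: would overdraw
        (date(2024, 1, 15), 1200.0),  # paycheck
    ],
)
# shortfalls shows the account dips to -50.0 on Jan 10,
# even though the month ends positive at 1150.0
```

The useful property is exactly the one mentioned above: the overdraft warning fires on the intermediate date, not just on the end-of-month total.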

It's very easy to reconcile because it keeps a running balance. Input is manual but very fast thanks to autocomplete, and after each entry it shows how the budget line item was affected.

The budget works like an envelope budget, with detailed budget items. This way I can create transactions from the budget, like a car payment. With a detailed budget the app can better understand which limits can be moved to cover overspending (for example, within transportation it can reduce the gas limit but cannot reduce the car payment).
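One plausible way to model that "move flexible limits, leave fixed ones alone" rule (a hypothetical sketch, not Pleke's code; the category data below is invented) is to mark each envelope as flexible or fixed and pull from flexible ones until the deficit is covered:

```python
def cover_overspend(envelopes, overspent_item, deficit):
    """Cover an overspent item by pulling from flexible envelopes
    in the same category; fixed items (e.g. a car payment) are untouched."""
    moves = {}
    for name, env in envelopes.items():
        if deficit <= 0:
            break
        if name == overspent_item or not env["flexible"]:
            continue  # never raid the overspent item itself or a fixed item
        take = min(env["remaining"], deficit)
        if take > 0:
            env["remaining"] -= take
            moves[name] = take
            deficit -= take
    return moves, deficit

transportation = {
    "car payment": {"remaining": 0.0, "flexible": False},
    "gas":         {"remaining": 80.0, "flexible": True},
    "parking":     {"remaining": 30.0, "flexible": True},
    "maintenance": {"remaining": 0.0, "flexible": True},
}
# maintenance overspent by 100: gas gives 80, parking gives 20,
# the car payment is never touched
moves, left = cover_overspend(transportation, "maintenance", 100.0)
```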

Pleke has a local database with cloud backup, you can view and enter transactions when offline. There are apps for iPhone and Android.

Would love any feedback!


HN is missing a user-follow and a user-tagging button :-)


I wish HN published the user comments/submissions pages as RSS.

For example, I "follow" a few users on StackOverflow by adding the feed that SO makes available linked from their profile. Super useful.


If that's for me, I really appreciate that :)

There's obviously that option on Twitter (same username as here), if you want to follow there ;)


Did it already :-) Good idea putting the Twitter link on your profile


I've seen queries running for one or two minutes, raising user complaints. Then we looked at them, and a few changes to indexes and query hints brought execution down to sub-second.

Before thinking about distributed systems, there is an entire database optimization toolkit to make use of: primary key review, secondary index creation, profiling, view or stored procedure creation, temporary tables, memory tables and so on.
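As a minimal illustration of the profile-then-index part of that toolkit (using SQLite only because it is self-contained; the table and index names are invented, and the same workflow applies to any RDBMS via its own plan-inspection tool):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
con.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, float(i)) for i in range(10_000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite scans the table or uses an index
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # full table scan
con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # now a search using the secondary index
```

Profiling first matters: the plan output tells you whether the index is actually being used before you pay for it in write cost and storage.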


Indexes are not free: they take up space and they make mutations more costly. Also, building an index may not even be possible while your application is running, because PostgreSQL and other RDBMSs have inadequate facilities for throttling index construction so that it doesn't harm the online workload. You might have to build indexes at midnight on Sundays, or even take your whole system offline. It can be a nightmare.

This isn't just a problem for SQL databases. Terrible-but-popular NoSQL systems like MongoDB also rely heavily on indexes while providing zero or few safety features that will prevent the index build from wrecking the online workload.

I personally prefer databases that simply do not have indexes, like bigtable, because they require more forethought from data and application architects, leading to fundamentally better systems.


I've seen https://www.postgresql.org/docs/12/sql-createindex.html#SQL-... work well in practice on busy transactional databases. I'd be interested in knowing about cases where it doesn't work well.

My experience on systems without indexes differs strongly from yours. Yes, they can work well. But if you have multiple use cases for how your data is being queried, they push you into keeping multiple copies of your data. And then it is very, very easy to lose consistency. And yes, I know about "eventual consistency" and all that - I've found that in practice it is a nightmare of special cases that winds up nowhere good.


Just from personal experience: if they can build Gmail on top of a database (Bigtable) with neither indexes nor consistency, then it will probably also be suitable for my far smaller, much less demanding products.

On the other hand I've seen, and am currently suffering through, products that have desperate performance problems with trivial amounts (tens of GB) of data in relational databases with indexes aplenty.


That's not exactly true: Bigtable still has an index. You're just restricted to one per table, and only on the row key. Instead you spend time finding the right row key for each value you're going to want to look up. Picking bad keys will still give you terrible performance, since you'll be looking up the wrong thing to get what you actually want.
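That single key-ordered "index" is why queries have to be expressible as prefix scans over well-designed row keys. A toy illustration (the key scheme below is a common Bigtable-style pattern, not any specific schema, simulated here with a sorted list):

```python
from bisect import bisect_left

def row_key(user_id, ts):
    """Compose a row key so that all of one user's rows are contiguous.
    Zero-padding makes lexicographic order match numeric order."""
    return f"user{user_id:08d}#{ts:012d}"

# Bigtable keeps rows sorted by key; a sorted list stands in for that here.
rows = sorted(row_key(u, t) for u in (1, 2, 3) for t in (100, 200, 300))

def prefix_scan(rows, prefix):
    """Return all rows whose key starts with `prefix`, via one binary search."""
    i = bisect_left(rows, prefix)
    out = []
    while i < len(rows) and rows[i].startswith(prefix):
        out.append(rows[i])
        i += 1
    return out

hits = prefix_scan(rows, "user00000002#")  # all rows for user 2, in time order
```

Pick the key for a lookup pattern you don't have (say, "all users who bought item X") and no amount of scanning this layout helps, which is the forethought the grandparent is pointing at.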


This. Using any database requires building up an expertise and understanding how to use its capabilities properly. If you hit a wall with a non-distributed database and your solution is to replace it with a distributed one - you will have a bad time. The surface area of what you need to know to use it properly still includes your basic data modeling, indices, troubleshooting that was required before, with a whole lot of networking, consensus protocols and consistency models to worry about (just to name a few).


> Then we looked at it, and with a few changes in indexes and query hints brought it down to sub-second execution.

This is exactly what the parent comment said: "you should try to build your application so queries and transactions are very short".

If you're claiming that the parent is incorrect about "sometimes, the only answer is to distribute the queries across other boxes", my guess is that you probably don't work at a scale where you've learned that query optimization can't solve every database performance problem.


Based on my past experiences, I'd be happy to take an even money bet that more than 95% of organizations that go distributed for performance reasons actually caused themselves more problems than they solved.

This does NOT mean that there are no use cases for distributed - I've seen Google's internals and it would be impossible to do that any other way. But it does mean that when someone is telling you with a straight face that they needed horizontal scalability for performance, you should assume that they were probably wrong. (Though probably saying that is not the wisest thing - particularly if the person you're talking to is the architect whose beautiful diagrams you'd be criticizing.)

So yes, there are problems that require a distributed architecture for performance. That doesn't contradict the point that every other option should be explored first.


> Based on my past experiences, I'd be happy to take an even money bet that more than 95% of organizations that go distributed for performance reasons actually caused themselves more problems than they solved.

Oh, I'm absolutely positive you're right! And as you know, even in distributed architectures there's a huge range of solutions from "let's do these n reasonably simple things" to "let's just rebuild our system to solve every conceivable future problem". I don't know of a scenario where the latter has ever worked.


I'm referring to parent's point "Sometimes you just have to do joins across large tables.". Indexes and query hints can make a world of difference once you profile how the query is being executed.

Another option is to create a stored procedure that first extracts a smaller amount of data from the first table into a temporary table and then joins it to the other large table.
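The shape of that technique looks like this (sketched with SQLite from Python for a self-contained example, rather than an actual stored procedure; all table names and data are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
""")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(i, "north" if i % 10 == 0 else "south") for i in range(1000)])
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 1000, 10.0) for i in range(5000)])

# Step 1: pull only the small slice we care about into a temp table.
con.execute("""CREATE TEMP TABLE north_customers AS
               SELECT id FROM customers WHERE region = 'north'""")

# Step 2: join the small temp table against the large table.
total = con.execute("""SELECT SUM(o.total)
                       FROM north_customers n
                       JOIN orders o ON o.customer_id = n.id""").fetchone()[0]
```

The win comes from the first step shrinking one side of the join before the expensive part runs; whether it beats a direct join is exactly the kind of thing that has to be measured, per the point below.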

Measuring and testing different optimization methods takes a lot of effort and time. We should only do that when faced with a real problem of slowness.

"The first rule of optimization is: Don't do it. The second rule of optimization (for experts only) is: Don't do it yet. Measure twice, optimize once."


I'm building an app using Postgres for the first time. Naturally I was a bit worried about performance and scaling if the not-yet-launched app becomes a major success.

I began simulating a heavy use scenario. 100k users creating 10 records daily for three years straight.

100,000 × 10 × 365 × 3 ≈ 1 billion rows, or about 200 GB with a record length of 200 bytes. This is peanuts for modern databases and hardware.
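The back-of-envelope numbers check out, including the average write rate that comes up in the reply below:

```python
users, records_per_day, years = 100_000, 10, 3

rows = users * records_per_day * 365 * years          # 1,095,000,000 rows
gb = rows * 200 / 1e9                                 # 200-byte records -> 219 GB
writes_per_second = users * records_per_day / 86_400  # ~11.6 writes/s on average
```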

Seems like a single node can support it for a long way before I have to worry about performance...


You classifying 11 writes per second as "heavy use" reminds me of how people on average completely underestimate how fast computers actually are (when they're not bogged down by crappy programs).


I don't believe the grandparent's simulation actually ran for three years; more likely it was a test of operations against data of that size.

Still, your main point stands. Around 2001 I wrote a C-program to record every file and size on a large hard disk. We were all amazed that it finished (seemingly) before the enter key had come back up. Must be a bug somewhere, right? Nope.

Much earlier I wrote a Pascal program on a 486 in school that did some calculations over and over again, writing the output to the screen. It blew my mind then how fast the computer could do it.

