- The agent has a tool to set its task to 'completed', 'failed', or 'needs_help', the last being an option for human-in-the-loop scenarios. Sometimes the agent gets lazy and says it needs help prematurely.
- Additionally, the agent can create subtasks for itself, either to run immediately or to schedule for later. Here again it can call that tool a bit too eagerly, filing duplicate subtasks for a task that involves repetitive work.
- Properly handling very long-running tasks (1+ hours). The context window eventually hits its limit (this will be addressed this week).
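A minimal sketch of what the status tool described above might look like. The names, schema, and the "require a reason for needs_help" guard are all assumptions for illustration, not Bytebot's actual implementation:

```python
# Hypothetical sketch of a task-status tool; names and fields are
# assumptions, not the product's real API.
ALLOWED_STATUSES = {"completed", "failed", "needs_help"}

def set_task_status(task: dict, status: str, reason: str = "") -> dict:
    """Set a task's terminal status. Requiring a non-empty reason for
    'needs_help' is one cheap way to discourage premature escalation."""
    if status not in ALLOWED_STATUSES:
        raise ValueError(f"unknown status: {status}")
    if status == "needs_help" and not reason.strip():
        raise ValueError("'needs_help' requires a non-empty reason")
    task["status"] = status
    task["status_reason"] = reason
    return task
```

Forcing the model to articulate why it is stuck won't eliminate lazy escalations, but it gives the loop something concrete to validate before surfacing the task to a human.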
Aside from those top-of-mind issues, there's a whole bunch of scaffolding work - filesystem permissions, prompt-injection security, I/O support, token cost - lots to improve!
We're still super early, but already these agents are showing flashes of brilliance, and we're gaining more and more conviction that this is the right form factor.
A “flash” of anything is also called a fluke, or a coincidence. The dumbest moron can have a flash of brilliance on occasion. So could a random word masher. Consistency is what matters.
> and we're gaining more and more conviction that this is the right form factor
Are we? Who’s “we”? Because it looks to me like the LLM approach is lacklustre if you care about truth and correctness (which you should) but the people and companies invested don’t really have a better idea and are shoving them down everyone’s throats in pursuit of personal profit.
Agreed, and the consistency has improved over time. I remember only 9 months ago struggling to get a browser agent to accurately click on a checkbox. The growth trajectory is what has us excited.
> We're still super early, but already these agents are showing flashes of brilliance, and we're gaining more and more conviction that this is the right form factor
Slow down cowboy; we're seeing "flashes of brilliance" and "that this is the right form factor" for writing code only!
I'm still waiting for AI/LLMs to pose a danger to jobs other than those in software development and the arts.
> This one isn't for coding, they mention in the post that coding agents thrive in custom tool-use environments.
Well, that is why I am skeptical and said
>> I'm still waiting for AI/LLMs to pose a danger to jobs other than those in software development and the arts.
The goal of this product is admirable but, I feel, lacks some grounding: doing screenshots, then converting those images to text, then processing, then converting that to actions, then converting the actions to input events ... results in 4 separate points of failure. So many points of failure, each with a success rate (last I checked) of <90%, gives you something stupid like an eventual success rate of 0.9 * 0.9 * 0.9 * 0.9 ≈ 0.66.
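The compounding arithmetic above is easy to sanity-check in a couple of lines (the 90%-per-stage figure is the commenter's estimate, not a measured number):

```python
from functools import reduce

# Four sequential pipeline stages, each assumed ~90% reliable.
stage_success = [0.9, 0.9, 0.9, 0.9]

# End-to-end success is the product of the per-stage rates.
pipeline_success = reduce(lambda a, b: a * b, stage_success)
print(round(pipeline_success, 4))  # 0.6561, i.e. the ~0.66 quoted above
```

The general point holds for any chain of independent lossy stages: reliability decays multiplicatively, so even "pretty good" per-stage rates compound into a mediocre end-to-end rate.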
The same iterative workflow for software development is pretty much 2 steps: process input, then produce output, with 100% success (or close to it for "output", as it's just rewriting the files according to the processing) and 90% for processing which is why it appears to work so well[1].
I dabbled briefly in this and explored a few different ways of making LLMs use the ERP/business system effectively, and with all the current popular business systems, this is simply not possible with a high enough success rate because those systems offer few "structured text" outputs, and even fewer "structured text" inputs. In fact, some of them have exactly zero "structured text" inputs.
To make the most of LLMs in your business system, you're going to need a new one that is primarily text-IO based (structured text, if necessary) and only secondarily GUI-for-humans based.
[1] In truth, using tools is a poor way to extend the reach and grasp of the LLM into the operator's context.
It works well for one mainstream use-case: software development, because then you need less than a dozen tools to automate an entire development iteration (read file, list files, insert into file, run test command, etc).
Try doing that with a mini-ERP type of system; there's just no way to keep a small set of 12 tools that can do any workflow the operator can do. You'll quickly run into a situation where every prompt request includes tool descriptions for about 500 tools.
Agentic automation is working very well for coding, where all the input is structured text, all the output is structured text, and all the changes are structured text.
The only way for ERP, Accounting, etc to ever get to this level of agent-based automation is if the base product itself is completely 100% structured text IO based, with the human-operator interface built on top of that.
I respectfully disagree! There's a lot of opportunity behind keyboard + mouse + screen.
In a way Bytebot is a maximalist bet on the growth and improvement of multi-modal LLMs. I firmly believe that in a short period of time, the token cost will drop, while the capability increases (both dramatically). It's still uncertain, which makes it a great asymmetric bet.
We don't do any sort of grounding or image conversion, and we offer a handful of tools. I'll go into more detail in my next post.
See my comprehensive reply downthread (it's very long, you cannot miss it).
While I am skeptical due to already having explored this for SMME Line of Business applications, I wish you all the best of luck.
My approach is to simply build a new system from the ground up that can take advantage of structured IO.
[EDIT: send me a message with a link to a post about your product (or this blog), I'll connect with you on linked-in and share your post with my network, meager though it may be]
$450 renewal fee, $300 annual travel credit, so the card costs $150 per year.
With points valued at $0.015, you need to earn 10,000 points to break even (150/0.015).
You get 3 points for every $1 spent on food/travel, so you need to spend $3,333/year on those categories to break even. Personally I spend way more, so the card is definitely worth it to me.
It seems you did not read the comments? You are comparing against cash. Nobody claimed cash was better. The entire discussion was in response to the comment [1] that "Chase Sapphire Reserve is easily the best card I own and I have an Amex Platinum", hence we were comparing against other credit cards (not against cash/debit/etc.), and as an example I explicitly compared CSR to the Uber card for you. Read the above discussion.
I think his math is right, but that still equates to about $9.50 a day on food and travel just to break even, which is a lot.
I got the Uber credit card, which I think is a lot better since there's no fee and it's straight cash back (point value can change any time they want).
1. It depends, you're able to toggle whether the image can be viewed once or multiple times.
2. We think viewing a portion of the image makes it interesting. You can treat an image as a scavenger hunt, and hide clues within it. It also discourages screenshots!
You can already enter a drop-off location. It might be that Uber needs to make this feature more pronounced in the UI.
On the confirmation screen, you can tap a little plus button next to the pickup address to enter your drop-off location. It's come in handy for me a few times.