Nice!

Totally agree with the project goals. It seems too many other packages are created by people who are researchers (or enthusiasts) first and software developers second, and it shows.

I see you're using Pydantic. I've recently been playing with using Pydantic to implement ChatGPT functions, making it a bit easier to define functions (tools) with more control over the attributes, like this:

    import pydantic

    class SearchWeb(pydantic.BaseModel):
        """
        Docstring description to help GPT figure out what this does, like functions in your library.
        """
        query: str = pydantic.Field(description="More info so GPT understands how to use this param")

        def handle(self):
            # my wrapper will call this to implement the tool after the arguments are parsed;
            # at this point you can be sure self.query is correct and has passed any validation you might have
            ...

It's definitely more verbose than the function definitions you have now, but you get the schema definition for free, and it's stricter about argument parsing. It also makes it easy to throw errors back at GPT if it hallucinates parameters incorrectly.
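
Roughly, the wrapper glue around it looks something like this (just a sketch, not my exact code; function_spec/dispatch are made-up names here, and it assumes Pydantic v1's schema()/parse_raw()):

    import pydantic

    def function_spec(model):
        # build the OpenAI "functions" entry straight from the model
        return {
            "name": model.__name__,
            "description": (model.__doc__ or "").strip(),
            "parameters": model.schema(),  # Pydantic generates the JSON schema
        }

    def dispatch(model, arguments_json):
        try:
            tool = model.parse_raw(arguments_json)  # validate the arguments GPT sent back
        except pydantic.ValidationError as e:
            return {"error": e.errors()}  # hand the validation errors back to GPT so it can retry
        return tool.handle()

The call site then just passes the arguments string from the function_call response into dispatch().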

...aaaanyways, great work there, I'll be following the progress!



There was another post today about using Pydantic for function enabled completions: https://github.com/jxnl/openai_function_call

I whipped up an example doing something similar last Friday using a decorator, inspect, ast, and __doc__: https://gist.github.com/kordless/7d306b0646bf0b56c44ebca2b8e.... The example pulls top results from Algolia's HN search and then chains them into another prompt for GPT-X. The blog post is here: https://www.featurebase.com/blog/function-integration-in-ope...
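
The core of it is roughly this pattern (a simplified sketch, not the actual gist code; the real thing infers parameter types via ast, here everything is just assumed to be a string):

    import inspect

    def openai_function(fn):
        # decorator: read the signature and __doc__ to build the function spec
        sig = inspect.signature(fn)
        fn.spec = {
            "name": fn.__name__,
            "description": inspect.getdoc(fn) or "",
            "parameters": {
                "type": "object",
                "properties": {name: {"type": "string"} for name in sig.parameters},
                "required": list(sig.parameters),
            },
        }
        return fn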

Currently integrating this approach into PythonGPT[1], which will build a function on the fly, extract the method info, then call the code in exec(). I would label it "very dangerous"...

[1] https://github.com/FeatureBaseDB/PythonGPT
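
The exec() part is roughly this, which is where the "very dangerous" label comes from (hypothetical snippet, not PythonGPT's actual code):

    import textwrap

    generated = textwrap.dedent('''
        def add(a, b):
            """Add two numbers."""
            return a + b
    ''')
    namespace = {}
    exec(generated, namespace)         # run the model-generated source
    print(namespace["add"].__doc__)    # the extracted method info
    print(namespace["add"](2, 3))      # and the actual call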


> There was another post today about using Pydantic for function enabled completions

That post let me know that Pydantic's schema() function actually works to produce a valid JSON schema, so I was able to optimize from there. (There may be a few optimizations still to be done: schema() also returns an unnecessary title field, and I need to experiment to see whether I need to remove it.)
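
If the titles do turn out to be safe to drop, something like this would do it (just a sketch, using the SearchWeb model from upthread; schema() puts a title at the top level and on each property):

    def strip_titles(schema):
        schema.pop("title", None)
        for prop in schema.get("properties", {}).values():
            prop.pop("title", None)
        return schema

    parameters = strip_titles(SearchWeb.schema())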


Ah, I see I've been ninja'd by about 17 hours: https://github.com/minimaxir/simpleaichat/releases/tag/v0.2.... :-D


I've spent a lot of time experimenting with schema: https://github.com/minimaxir/simpleaichat/blob/main/examples...


As a researcher, what would be the number one sin that we commit? Is it something I could make the effort to improve upon?


Not a sin, I don't want to diminish in any way the awesome work that's being done!

It's just that these packages/libraries/frameworks are often in what I'd describe as a "proof of concept" phase: not very developer-friendly (as in user-friendly for people wanting to try it out), missing docs, not handling errors, and not written in a maintainable way.

So I think a "second generation" of tools/libraries that are basically product-level quality, building on your work and focusing on those non-core-tech aspects of the experience, will be the next step in bringing AI to the (developer) masses. Tools such as this package.



