Generative AI Gateway Pitch to ORI

Generative AI Gateway writes:

Hi Michelle,

I’m the CEO of - a Generative AI Gateway application built on enterprise best practices.

The software is complementary to any existing Gen AI stack:

Simplifies - LLM abstraction, model routing, and chaining.
Secures - configurable model and API policies, and security (OAuth/RBAC).
Eases integration - an enterprise-grade versioned API with a code-generated SDK.
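
In concrete terms, "LLM abstraction and model routing" means that application code calls a single interface while a routing table picks the concrete model, so swapping models is a configuration change rather than a code rewrite. Below is a minimal, generic Python sketch of that pattern; the names and types are hypothetical illustrations, not the actual Roli SDK:

    from dataclasses import dataclass
    from typing import Dict, Protocol

    class ChatModel(Protocol):
        # Any model behind the gateway exposes the same completion interface.
        def complete(self, prompt: str) -> str: ...

    @dataclass
    class HostedModel:
        name: str
        def complete(self, prompt: str) -> str:
            # A real client would call the hosted provider's API here.
            return f"[{self.name}] response to: {prompt}"

    @dataclass
    class LocalModel:
        name: str
        def complete(self, prompt: str) -> str:
            # A real client would call a locally served model here.
            return f"[{self.name}] response to: {prompt}"

    # Routing table: swapping or re-routing models is a config change,
    # not a rewrite of the services that call ask().
    ROUTES: Dict[str, ChatModel] = {
        "default": HostedModel("hosted-large"),
        "cheap": LocalModel("local-small"),
    }

    def ask(prompt: str, route: str = "default") -> str:
        return ROUTES[route].complete(prompt)

    if __name__ == "__main__":
        print(ask("Summarize this abstract.", route="cheap"))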

Roli is excellent for prototyping in development, as well as for structuring production release management.

Let us show you how we can remove entire epics of development from your timeline.

Would you be interested in discussing this further?

Sure,

What do you have that would be accessible to a non-profit R&D firm?

-Michelle

Our 'build' tier is $500 a month; usage fees may not amount to much, since you're likely not opening usage or consumption to the greater market. We could look at that together to be sure we contain costs, as you're a non-profit. For that $500, you're buying:

1) Flexibility, so you aren't paying expensive model call fees when there are countless alternative options. Roli's decoupling gives you the agility to swap out models without having to rewrite services or code.
2) The ability to build and manage compound, chained model systems extremely easily (a short sketch follows this message).
3) A place to configure access and data policies to keep your house in order (risk mitigation).

Hi Michelle, may I ask your thoughts? With another client we're developing a per-call fee to reflect their own model; maybe that's fitting for you as well. Let me know what you think.
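
"Chained model systems" here just means feeding one model call's output into the next step, with each step free to use a different route. A minimal, self-contained Python sketch of the idea, again with hypothetical names rather than the actual Roli API:

    def ask(prompt: str, route: str = "default") -> str:
        # Stand-in for a routed gateway call; a real deployment would dispatch
        # to whichever model the named route is configured to use.
        return f"[{route}] response to: {prompt}"

    def chain(text: str, steps: list[tuple[str, str]]) -> str:
        # Each step is (instruction, route); the output of one step feeds the next.
        for instruction, route in steps:
            text = ask(f"{instruction}\n\n{text}", route=route)
        return text

    if __name__ == "__main__":
        print(chain(
            "Long technical report text goes here.",
            steps=[
                ("Summarize in three bullet points.", "cheap"),
                ("Draft a plain-language abstract from this summary.", "default"),
            ],
        ))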

Since you asked, I will share my thoughts.

For $6000 a year, I can fund a graduate student to do focused, competent, and mutually beneficial work for the engineering R&D that we do. The student gets valuable training, we get work done, and we build a relationship for the future. Multiple people are educated and employed.

LLMs aren't going to do anything for us, in our mission to serve humans doing science, except deliver obfuscated search results scraped from other humans without their consent. Stolen work is not something we can use without harsh long-term repercussions. Sadly, we are now fully cognizant that we are all working hard to make the novel discoveries that LLMs will scrape, jumble up, and present as "anonymous" work. This is not what open source software and hardware are about, but it is what they are rapidly becoming. The open source licenses that LLMs ignored are being actively violated, and this will result in people halting the publication of their work.

Our experience so far with LLMs is that search results are not explained, they are not cited correctly, they are generally not accurate enough to use in R&D, and they are no more useful for guiding us than Google Scholar or writing to paper authors directly.

LLMs have their place, but proposing a six-grand-a-year fleece for vague arm-waving on LinkedIn was a huge turn-off.

You literally cannot guarantee results from LLMs. You can’t ensure that they are consistent, that there’s any sort of chain of custody, and you can’t say that the answers won’t change over time. This is inherent in the technology, in the math, and in the theory.

There's nothing new under the sun, and this round of automation is no different from all of the ones that have come before it, which have left ordinary human workers out in the cold. This time automation is coming for knowledge workers, and they will be treated just like the factory workers before them. This process will widen an already bad wealth gap and pay disparity. Underrepresented people like myself, largely missing from LLMs, will suffer the most.

Those are my thoughts based on the little you have revealed.

-Michelle Thompson, ORI CEO

Open Research Institute @OpenResearchIns