OpenGPTs Deep Dive and More - This Week in AI

OpenGPTs Update: A Deep Dive into LangGraph-Powered Features

Over two months ago, following OpenAI dev day, the team unveiled OpenGPTs, an innovative take on an open-source GPT store. Powered by an early version of LangGraph, an extension of LangChain for building agents as graphs, OpenGPTs has undergone significant enhancements.

LangGraph Integration and Evolution

Initially launched with an early version of LangGraph, OpenGPTs has seen substantial progress. Two weeks ago, the team officially released LangGraph, and over the past weekend, OpenGPTs was updated to fully leverage LangGraph's capabilities while introducing several new features.

MessageGraph: The Heart of OpenGPTs

At the core of OpenGPTs is MessageGraph, a distinct type of graph introduced in LangGraph. This specialized graph operates on the principle of message passing, where each node takes in a list of messages and returns messages to append to the list. The significance of message passing lies in its relevance to new "chat completion" models, alignment with distributed systems' communication methods, ease of visualization, and conceptual extensibility to multi-agent systems.

By adopting MessageGraph, OpenGPTs makes assumptions about input and output formats while remaining agnostic to the cognitive architectures of the agents it creates. This flexibility allows support for a wide array of cognitive architectures.
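The message-passing principle is easiest to see in miniature. The sketch below is a toy re-implementation of the idea, not LangGraph's actual API: each node receives the full message list and returns new messages to append, and edges decide which node runs next.

```python
# Toy illustration of the message-passing model behind MessageGraph.
# Names here are hypothetical; LangGraph's real API differs.
from typing import Callable, Dict, List, Tuple

Message = Tuple[str, str]                     # (role, content)
Node = Callable[[List[Message]], List[Message]]

class ToyMessageGraph:
    def __init__(self) -> None:
        self.nodes: Dict[str, Node] = {}
        self.edges: Dict[str, str] = {}
        self.entry: str = ""

    def add_node(self, name: str, fn: Node) -> None:
        self.nodes[name] = fn

    def add_edge(self, src: str, dst: str) -> None:
        self.edges[src] = dst

    def set_entry(self, name: str) -> None:
        self.entry = name

    def invoke(self, messages: List[Message]) -> List[Message]:
        current = self.entry
        while current is not None:
            # Each node sees the whole list and returns messages to append.
            messages = messages + self.nodes[current](messages)
            current = self.edges.get(current)  # no outgoing edge ends the run
        return messages

graph = ToyMessageGraph()
graph.add_node("model", lambda msgs: [("ai", f"echo: {msgs[-1][1]}")])
graph.set_entry("model")
result = graph.invoke([("human", "hello")])
```

Because every architecture speaks the same "list in, messages out" protocol, swapping cognitive architectures means swapping graphs, not changing the surrounding application.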

Cognitive Architectures in OpenGPTs

In the recent update, OpenGPTs introduced three distinct cognitive architectures:

1. Assistants: Equipped with varying tools, Assistants use a language model (LLM) to decide when to employ those tools. This architecture provides high flexibility but can be less reliable.

2. RAG (Retrieval-Augmented Generation): Focused on retrieving information from uploaded files, RAG bots follow a structured approach: they first retrieve relevant documents, then use them in a separate call to the language model.

3. ChatBot: This simple architecture involves a direct call to a language model parameterized by a system message. Despite its simplicity, it allows GPTs to adopt different personas and characters.
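The retrieve-then-generate flow behind the RAG architecture can be sketched in plain Python. Here `retrieve` and `call_model` are toy stand-ins for a vector store lookup and an LLM call, not OpenGPTs internals:

```python
# Hedged sketch of the two-step RAG flow: retrieval, then a separate
# model call grounded in the retrieved documents.
from typing import List

DOCS = [
    "LangGraph builds agents as graphs.",
    "OpenGPTs supports three cognitive architectures.",
]

def retrieve(query: str, docs: List[str]) -> List[str]:
    # Naive keyword matching in place of a real vector store.
    return [d for d in docs if any(w.lower() in d.lower() for w in query.split())]

def call_model(query: str, context: List[str]) -> str:
    # Stand-in for an LLM call that answers using the retrieved context.
    return f"Answer to {query!r} using {len(context)} document(s)."

def rag_bot(query: str) -> str:
    context = retrieve(query, DOCS)      # step 1: retrieve documents
    return call_model(query, context)    # step 2: separate model call

answer = rag_bot("LangGraph")
```

The fixed retrieve-then-generate sequence is what makes this architecture more predictable than the Assistants architecture, which lets the model decide at runtime whether to retrieve at all.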

Persistence and Configuration

Persistence, an essential requirement for OpenGPTs, is handled by LangGraph checkpoints. Chat messages are saved with a RedisCheckPointer, providing persistence functionality that will be expanded in future updates.
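The checkpointing idea itself is simple, and can be sketched with an in-memory dict standing in for Redis (the class and method names below are illustrative, not LangGraph's actual checkpointer interface): each conversation thread maps to its saved message list, so a chat can be resumed later.

```python
# Toy checkpointer: a dict plays the role of Redis in this sketch.
from typing import Dict, List, Tuple

Message = Tuple[str, str]                 # (role, content)

class InMemoryCheckpointer:
    def __init__(self) -> None:
        self._store: Dict[str, List[Message]] = {}

    def put(self, thread_id: str, messages: List[Message]) -> None:
        self._store[thread_id] = list(messages)   # persist a snapshot

    def get(self, thread_id: str) -> List[Message]:
        return list(self._store.get(thread_id, []))

cp = InMemoryCheckpointer()
cp.put("thread-1", [("human", "hi"), ("ai", "hello!")])

resumed = cp.get("thread-1")              # later: resume the conversation
resumed.append(("human", "continue"))
cp.put("thread-1", resumed)
```

Keying checkpoints by thread id is what lets a single OpenGPTs deployment juggle many independent conversations while surviving restarts.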

LangChain's configurability features were harnessed to allow users to choose language models, system messages, tools, and other configurations. Marking fields as configurable and saving configurations ensures users can reproduce their chatbots with specific setups.
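A minimal sketch of that configurability pattern, with hypothetical names rather than LangChain's actual configurable-fields API: the bot's tunable fields live in a saved config object, and rebuilding from the same config reproduces the same setup.

```python
# Illustrative sketch of configurable fields; not LangChain's API.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class BotConfig:
    llm: str = "gpt-4"                            # configurable model choice
    system_message: str = "You are a helpful assistant."
    tools: Tuple[str, ...] = ()

def build_bot(config: BotConfig) -> Dict[str, object]:
    # Stand-in for assembling a runnable chatbot from its saved config.
    return {
        "llm": config.llm,
        "system_message": config.system_message,
        "tools": list(config.tools),
    }

saved = BotConfig(llm="gemini-pro", system_message="Speak like a pirate.")
bot = build_bot(saved)        # rebuilding from `saved` reproduces the setup
```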

New Models and Tools

The OpenGPTs update introduced new models, including Google's Gemini model, and integrated Mixtral via Fireworks for simpler architectures. The agents were updated to use tool calling instead of function calling, enhancing their performance and reliability.

A new tool, Robocorp's Action Server, was introduced. It allows users to define and run arbitrary Python functions as tools, providing a versatile way to incorporate various tools within OpenGPTs.

`astream_events` for Streamlined Event Handling

OpenGPTs utilizes the new `astream_events` method to streamline event streaming. This method makes it easier for users to surface new tokens, function calls, and results. It also enhances the user interface by efficiently presenting relevant messages or chunks.
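The consumer-side shape of such an event stream can be sketched with a plain async generator. The event names below are illustrative of the kind of events a run emits (tokens, tool calls, results), not the exact `astream_events` schema:

```python
# Toy async event stream mirroring the consumption pattern described
# above; event names here are hypothetical.
import asyncio
from typing import AsyncIterator, Dict

async def toy_event_stream() -> AsyncIterator[Dict[str, str]]:
    # A real run would interleave model tokens and tool activity.
    yield {"event": "on_llm_new_token", "data": "Hel"}
    yield {"event": "on_llm_new_token", "data": "lo"}
    yield {"event": "on_tool_start", "data": "search"}
    yield {"event": "on_tool_end", "data": "3 results"}

async def collect_tokens() -> str:
    tokens = []
    async for ev in toy_event_stream():
        if ev["event"] == "on_llm_new_token":
            tokens.append(ev["data"])   # surface tokens to the UI as they arrive
    return "".join(tokens)

streamed = asyncio.run(collect_tokens())
```

Filtering one unified stream by event type is what lets a UI render tokens as they arrive while still showing tool calls and results in order.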

The recent advancements in OpenGPTs underscore its commitment to providing a versatile and powerful open-source GPT store. With LangGraph at its core, the platform offers a robust foundation for creating sophisticated language models and agents with diverse cognitive architectures.

Connery: Empowering Real-World Integration for Language Models with Open-Source Plugins

In a transformative move, developers have leveraged a decade of experience to create Connery, an open-source project addressing the persistent challenges in integrating Language Model (LLM) applications with real-world tasks. This innovation aims to streamline the complexity of integrating LLMs with third-party services, allowing for a more seamless and personalized user experience.

Connery's Mission: Bridging the Gap in LLM Integrations

Over the past ten years, the developers have tackled a variety of integrations, from conventional system integrations to crafting plugins for LLM applications, Continuous Integration/Continuous Deployment (CI/CD) workflows, Slack, and no-code tools. Recognizing the recurrent pain points in these endeavors, they decided to channel their expertise into Connery, an open-source project tailored to make LLM integrations accessible to everyone.

Key Features of Connery

1. Plugin Infrastructure for LLM Applications:

- Connery introduces a specialized plugin infrastructure for LLM applications, ensuring easy integration with third-party services.

- The platform efficiently manages runtime, seamlessly integrates with OpenGPTs, and offers a user interface for connection management, personalization, and safety.

2. Developer-Focused Tooling and Ecosystem:

- Connery is committed to building tooling and enhancing the developer experience for an open-source plugin ecosystem.

- The goal is to foster a collaborative community where developers can create, share, and customize plugins, promoting innovation and speed.

Challenges in Integrating LLMs with Real-World Applications

Some key challenges developers face when integrating LLM-based apps, such as chatbots and assistants, with real-world applications include the need for personalization and security, AI safety and control, and robust infrastructure for integrations.

1. Personalization and Security:

- Connery addresses the need for user authentication, authorization, and a user interface for managing connections and personalization.

- Connection management ensures secure authorization for AI-powered apps to access user services, while personalization features allow users to configure and personalize integrations according to their preferences.

2. AI Safety and Control:

- To tackle the inherent unpredictability of LLM-based apps, Connery introduces measures such as metadata for better action understanding and a human-in-the-loop capability.

- Audit logs are implemented to ensure consistency, compliance, and transparency in LLM app actions.

3. Infrastructure for Integrations:

- Connery recognizes the complexity developers face when building integration infrastructure for LLM-powered apps.

- Developers currently need to build custom integration infrastructure within their apps, including authorization for third-party services and support for different integration types and patterns.
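The safety measures from point 2 above (human-in-the-loop approval plus audit logging) can be sketched as a simple gate in front of every action. All names here are hypothetical, not Connery's actual API:

```python
# Sketch of a human-in-the-loop gate with an audit log; illustrative only.
from datetime import datetime, timezone
from typing import Callable, Dict, List

audit_log: List[Dict[str, str]] = []

def run_action(name: str, approve: Callable[[str], bool]) -> str:
    approved = approve(name)            # human-in-the-loop checkpoint
    audit_log.append({                  # every decision is recorded
        "action": name,
        "approved": str(approved),
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return f"{name}: executed" if approved else f"{name}: blocked"

# Example policy: auto-approve read-only actions, block everything else.
outcome = run_action("send_email", lambda a: a.startswith("read_"))
```

Keeping the gate and the log outside the LLM's control is the point: the model proposes actions, but execution and the compliance trail stay deterministic.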

The developers advocate for an open-source plugin infrastructure for LLM apps with the following characteristics:

1. It must be open-source.

2. It must have a collaboration model.

Client Perspectives: Developers and End-Users

Developers: Enjoy flexibility in creating or utilizing community plugins, which can be easily integrated into LLM apps through Connery clients.

End-Users: Personalize their experience on the Runner by connecting to personal accounts, authorizing LLM apps, and triggering actions. This process lets users control the app's actions and have the final say when needed.

The recent updates to LangChain's OpenGPTs showcase support for different cognitive architectures. The new 'assistants' feature enables the integration of tools, such as Connery actions, into custom GPTs. An illustrative example involves summarizing a webpage and sending it via email, utilizing actions from the Summarization and Gmail plugins.
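That summarize-then-email example is just two actions composed in sequence, which can be sketched in plain Python. `summarize` and `send_email` below are toy stand-ins for the Summarization and Gmail plugin actions, not Connery's real interfaces:

```python
# Hedged sketch of composing two plugin-style actions in sequence.
def summarize(text: str) -> str:
    # Toy summarizer: keep only the first sentence.
    return text.split(". ")[0] + "."

def send_email(to: str, body: str) -> str:
    # Stand-in for a Gmail action; a real plugin would call the API.
    return f"sent to {to}: {body}"

page = "OpenGPTs now runs on LangGraph. It also adds new tools and models."
receipt = send_email("me@example.com", summarize(page))
```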

Connery emerges as a pivotal solution for enhancing the integration of LLM applications with real-world tasks. Its open-source nature, collaboration model, and emphasis on an ecosystem of plugins foster innovation and efficiency in developing LLM-powered applications.


Rahul Sharma

Content Writer

Rahul Sharma graduated from Delhi University with a bachelor's degree in computer science. He is an experienced professional technical writer who has spent the last 12 years creating content for technology companies.
