
Local offline chat AI

House of BigOrange - Code Plugins - Sep 26, 2024

This plug-in deploys a local large language model offline and provides text generation, function calling, and RAG (retrieval-augmented generation) in your game without any external API calls.

  • Supported Platforms
    Win64
  • Supported Engine Versions
    5.2 - 5.4
  • Download Type
    Engine Plugin
    This product contains a code plugin, complete with pre-built binaries and all its source code that integrates with Unreal Engine, which can be installed to an engine version of your choice then enabled on a per-project basis.

This plug-in runs large language models locally on mid- and high-end personal computers, without any external services. It integrates the llama.cpp open-source library and can use the CPU efficiently for model inference.

It can be used in digital human projects as well as for AI NPCs in games. It supports custom character background information and retrieval-augmented generation over a knowledge base. Imagine each character as an AI NPC with its own unique story and knowledge base, opening up an unlimited number of possible conversations with players.

The plugin uses the open-source Qwen2.5-1.5B model by default; depending on your hardware, you can switch to models with more or fewer parameters.
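
The snippet below is only a hypothetical sketch of how a game might drive such a plug-in from C++. The class, function, and delegate names (ULocalChatAIComponent, SetCharacterBackground, LoadKnowledgeBase, RequestReply, FOnReplyReady) are invented for illustration and are not the plug-in's actual API; the real Blueprint nodes and C++ types are described in the user guide linked below.

    // Hypothetical sketch only: ULocalChatAIComponent, SetCharacterBackground,
    // LoadKnowledgeBase, RequestReply and FOnReplyReady are invented names used
    // to illustrate the workflow; see the user guide for the plug-in's real API.

    #include "CoreMinimal.h"
    #include "GameFramework/Actor.h"
    #include "MyAINPC.generated.h"

    UCLASS()
    class AMyAINPC : public AActor
    {
        GENERATED_BODY()

    public:
        AMyAINPC()
        {
            // Component that wraps the locally loaded model (e.g. Qwen2.5-1.5B).
            ChatAI = CreateDefaultSubobject<ULocalChatAIComponent>(TEXT("ChatAI"));
        }

        virtual void BeginPlay() override
        {
            Super::BeginPlay();

            // Give this NPC its own backstory and a knowledge base for RAG.
            ChatAI->SetCharacterBackground(TEXT("You are Mara, the village blacksmith. You speak plainly and love your craft."));
            ChatAI->LoadKnowledgeBase(TEXT("KnowledgeBase/mara_lore.txt"));
        }

        // Called by the dialogue UI when the player sends a message.
        void OnPlayerMessage(const FString& PlayerText)
        {
            // Inference runs on the local model; the reply is delivered through a
            // callback so the game thread is not blocked while tokens are generated.
            ChatAI->RequestReply(PlayerText, FOnReplyReady::CreateUObject(this, &AMyAINPC::HandleReply));
        }

    private:
        void HandleReply(const FString& ReplyText)
        {
            UE_LOG(LogTemp, Log, TEXT("NPC says: %s"), *ReplyText);
        }

        UPROPERTY()
        ULocalChatAIComponent* ChatAI = nullptr;
    };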


Documentation on OneDrive:

Local Chat AI User Guide.pdf

Documentation on Google Drive:

https://drive.google.com/file/d/1EvoBAh-2wbxgKyIaZ18XXbicJjDy1gjQ/view?usp=drive_link

Technical Details

Features:

  • Chat AI
  • Offline
  • RAG (retrieval-augmented generation)
  • Function calls (see the sketch below)
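
Continuing the hypothetical sketch above (RegisterFunction and FOnFunctionCalled are likewise invented names, not the plug-in's documented API), function calling typically means registering game actions the model is allowed to trigger and letting the plug-in parse the model's structured output back into a call:

    // Hypothetical sketch of function calling; names are illustrative only.
    // (Assumes RegisterNPCActions() is declared on the AMyAINPC class above.)
    void AMyAINPC::RegisterNPCActions()
    {
        // Describe a game action the model may request. When the generated text
        // contains a structured call such as
        //   {"name": "give_item", "arguments": {"item": "iron sword"}}
        // the plug-in would parse it and invoke the bound handler.
        ChatAI->RegisterFunction(
            TEXT("give_item"),
            TEXT("Give one item from the NPC's inventory to the player."),
            FOnFunctionCalled::CreateLambda([](const FString& ArgumentsJson)
            {
                UE_LOG(LogTemp, Log, TEXT("Model requested give_item with arguments: %s"), *ArgumentsJson);
            }));
    }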

Code Modules:

  • Name: LocalOfflineAI, Type: Runtime
  • Name: LocalOfflineAILibrary, Type: Runtime

Number of Blueprints: 3, Number of C++ Classes: 2

Supported Development Platforms: Win64

Supported Target Build Platforms: Win64

Important Notes:

Minimum hardware requirements: 32 GB of system memory, an NVIDIA 20-series or newer graphics card, and more than 8 GB of video RAM.

The model file and knowledge-base file are not included automatically when a project that uses the plug-in is packaged. You need to copy them manually to the corresponding location in the plug-in directory of the packaged project.
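
As an illustration of that manual step only (the folder names and file names below are hypothetical; the plug-in's actual directory layout is given in the user guide):

    <PackagedBuild>/Windows/<YourProject>/Plugins/LocalOfflineAI/<ModelFolder>/qwen2.5-1.5b-instruct.gguf
    <PackagedBuild>/Windows/<YourProject>/Plugins/LocalOfflineAI/<KnowledgeBaseFolder>/<your_knowledge_base>.txt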

The plugin comes with the free, open-source Qwen2.5 model by default, and you can also download and use other models.