Comprehensive guide to managing assistant configurations in the View Assistant platform for RAG and chat-only assistants.
Overview
Assistant configurations provide comprehensive settings management for AI assistants within the View Assistant platform. They enable creation and management of both RAG (Retrieval Augmented Generation) and chat-only assistants with customizable settings, vector database configurations, and generation parameters.
Assistant configuration operations are accessible via the View Assistant API at [http|https]://[hostname]:[port]/v1.0/tenants/[tenant-guid]/assistant and support full CRUD operations for configuration management.
API Endpoints
- GET /v1.0/tenants/[tenant-guid]/assistant/configs - Retrieve all assistant configurations
- POST /v1.0/tenants/[tenant-guid]/assistant/configs - Create a new assistant configuration
- GET /v1.0/tenants/[tenant-guid]/assistant/configs/[config-guid] - Retrieve a specific assistant configuration
- PUT /v1.0/tenants/[tenant-guid]/assistant/configs/[config-guid] - Update an assistant configuration
- DELETE /v1.0/tenants/[tenant-guid]/assistant/configs/[config-guid] - Delete an assistant configuration
- HEAD /v1.0/tenants/[tenant-guid]/assistant/configs/[config-guid] - Check configuration existence
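All of these endpoints share the same base path, so a small helper can build the URLs and probe existence via HEAD. The sketch below is illustrative only: the base URL and tenant GUID are placeholders for your deployment, and it uses just the Python standard library rather than the View SDK.

```python
import urllib.error
import urllib.request

BASE_URL = "http://localhost:8000"  # placeholder endpoint; adjust to your deployment
TENANT_GUID = "00000000-0000-0000-0000-000000000000"  # placeholder tenant GUID

def config_url(config_guid=None):
    """Build the assistant-configs URL, optionally for a specific config."""
    url = f"{BASE_URL}/v1.0/tenants/{TENANT_GUID}/assistant/configs"
    return f"{url}/{config_guid}" if config_guid else url

def config_exists(config_guid):
    """HEAD the config URL; a 200 status means the config exists."""
    req = urllib.request.Request(config_url(config_guid), method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

print(config_url("8d48511f-dc01-4b11-9eb7-f5bad86c482c"))
```

The same URL builder covers every operation in the list above; only the HTTP method changes.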
Assistant Configuration Object Structure
{
"GUID": "8d48511f-dc01-4b11-9eb7-f5bad86c482c",
"Name": "botox",
"Description": null,
"SystemPrompt": "You are an AI assistant augmented with a retrieval system. Carefully analyze the provided pieces of context and the user query at the end. \n\nRely primarily on the provided context for your response. If the context is not enough for you to answer the question, please politely explain that you do not have enough relevant information to answer. \n\nDo not try to make up an answer. Do not attempt to answer using general knowledge.",
"EmbeddingModel": "sentence-transformers/all-MiniLM-L6-v2",
"MaxResults": 10,
"VectorDatabaseName": "vectordb",
"VectorDatabaseTable": "view-73871c0b-00cd-44f0-be5f-f5612523d8c5",
"VectorDatabaseHostname": "pgvector",
"VectorDatabasePort": 5432,
"VectorDatabaseUser": "postgres",
"VectorDatabasePassword": "password",
"GenerationProvider": "ollama",
"GenerationModel": "qwen2.5:latest",
"GenerationApiKey": "",
"Temperature": 0.1,
"TopP": 0.95,
"MaxTokens": 4000,
"OllamaHostname": "192.168.86.250",
"OllamaPort": 11434,
"OllamaContextLength": null,
"ContextSort": true,
"SortBySimilarity": true,
"ContextScope": 5,
"Rerank": true,
"RerankModel": "cross-encoder/ms-marco-MiniLM-L-6-v2",
"RerankTopK": 5,
"RerankLevel": "Chunk",
"TimestampEnabled": true,
"TimestampFormat": "Date",
"TimestampTimezone": "UTC",
"UseCitations": false,
"CreatedUTC": "2025-04-11T19:05:56.694880",
"LastModifiedUTC": "2025-04-11T19:05:56.694880",
"ChatOnly": false
}
Field Descriptions
- GUID (string): Unique identifier for the configuration
- Name (string): Name of the assistant
- Description (string|null): Optional description of the assistant
- SystemPrompt (string): Initial system message that defines assistant behavior
- EmbeddingModel (string): Model used for embedding (e.g., "sentence-transformers/all-MiniLM-L6-v2")
- MaxResults (number): Maximum number of results to retrieve from the vector database
- VectorDatabaseName (string): Name of the vector database
- VectorDatabaseTable (string): Table/view name used for querying vectors
- VectorDatabaseHostname (string): Hostname of the vector database
- VectorDatabasePort (number): Port number of the vector database
- VectorDatabaseUser (string): Username used to connect to the vector database
- VectorDatabasePassword (string): Password for the vector database
- GenerationProvider (string): Provider used for text generation (e.g., "ollama")
- GenerationModel (string): Model used for generating text (e.g., "qwen2.5:latest")
- GenerationApiKey (string): API key used for the generation provider
- Temperature (number): Controls randomness in generation (higher = more random)
- TopP (number): Nucleus sampling threshold for token generation
- MaxTokens (number): Maximum number of tokens allowed in the response
- OllamaHostname (string): Hostname of the Ollama generation server
- OllamaPort (number): Port of the Ollama generation server
- OllamaContextLength (number|null): Max context window for Ollama (null for default)
- ContextSort (boolean): Whether to sort context entries
- SortBySimilarity (boolean): Whether to sort context entries by vector similarity
- ContextScope (number): Number of top context entries to include
- Rerank (boolean): Whether to apply a reranker model
- RerankModel (string): Model used for reranking results
- RerankTopK (number): Top K results to pass through reranker
- RerankLevel (string): Reranking granularity level (e.g., "Chunk")
- TimestampEnabled (boolean): Whether to include timestamps in output
- TimestampFormat (string): Format for timestamp (e.g., "Date", "ISO")
- TimestampTimezone (string): Timezone for timestamp formatting (e.g., "UTC")
- UseCitations (boolean): Whether to include citations in generated output
- CreatedUTC (datetime): Timestamp of creation in UTC
- LastModifiedUTC (datetime): Last modified timestamp in UTC
- ChatOnly (boolean): Whether this assistant is restricted to chat-only interactions
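To make the split between RAG and chat-only configurations concrete, here is an illustrative sketch of the two payload shapes. The field names follow the object structure above; the specific values (models, hostnames, names) are placeholders rather than platform defaults.

```python
# Minimal RAG payload: retrieval fields plus generation fields.
rag_config = {
    "Name": "Docs RAG Assistant",
    "SystemPrompt": "Use the provided context to answer questions.",
    # Retrieval settings, only used when ChatOnly is false:
    "EmbeddingModel": "sentence-transformers/all-MiniLM-L6-v2",
    "MaxResults": 10,
    "VectorDatabaseName": "vectordb",
    "VectorDatabaseHostname": "pgvector",
    "VectorDatabasePort": 5432,
    # Generation settings, shared by both assistant types:
    "GenerationProvider": "ollama",
    "GenerationModel": "qwen2.5:7b",
    "Temperature": 0.1,
    "ChatOnly": False,
}

# A chat-only payload omits the embedding and vector-database fields.
chat_only_config = {
    "Name": "Chat Only Assistant",
    "SystemPrompt": "You are a helpful assistant.",
    "GenerationProvider": "ollama",
    "GenerationModel": "qwen2.5:7b",
    "Temperature": 0.1,
    "ChatOnly": True,
}

# The fields present only in the RAG payload are the retrieval settings.
retrieval_only = sorted(rag_config.keys() - chat_only_config.keys())
print(retrieval_only)
```

This illustrates the design split: generation parameters apply to both assistant types, while embedding and vector database settings only matter when ChatOnly is false.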
Create RAG config
To create a RAG configuration, call POST /v1.0/tenants/[tenant-guid]/assistant/configs
curl --location 'http://view.homedns.org:8000/v1.0/tenants/00000000-0000-0000-0000-000000000000/assistant/configs' \
--header 'Content-Type: application/json' \
--data '{
"Name": "Basic RAG Assistant",
"Description": "uses qwen2.5:7b - ollama",
"SystemPrompt": "Use the provided context to answer questions.",
"EmbeddingModel": "sentence-transformers/all-MiniLM-L6-v2",
"MaxResults": 10,
"VectorDatabaseName": "vectordb",
"VectorDatabaseTable": "minilm",
"VectorDatabaseHostname": "pgvector",
"VectorDatabasePort": 5432,
"VectorDatabaseUser": "postgres",
"VectorDatabasePassword": "password",
"GenerationProvider": "ollama",
"GenerationApiKey": "",
"GenerationModel": "qwen2.5:7b",
"HuggingFaceApiKey": "",
"Temperature": 0.1,
"TopP": 0.95,
"MaxTokens": 500,
"OllamaHostname": "ollama",
"OllamaPort": 11434,
"ContextSort": true,
"SortByMaxSimilarity": true,
"ContextScope": 0,
"Rerank": true,
"RerankModel": "cross-encoder/ms-marco-MiniLM-L-6-v2",
"RerankTopK": 3
}'
import { ViewAssistantSdk } from "view-sdk";
const assistant = new ViewAssistantSdk(
"http://localhost:8000/", //endpoint
"<tenant-guid>", //tenant Id
"default" //access token
);
const createAssistant = async () => {
try {
const response = await assistant.AssistantConfig.create({
Name: "Basic RAG Assistant",
Description: "uses qwen2.5:7b - ollama",
SystemPrompt: "Use the provided context to answer questions.",
EmbeddingModel: "sentence-transformers/all-MiniLM-L6-v2",
MaxResults: 10,
VectorDatabaseName: "vectordb",
VectorDatabaseTable: "minilm",
VectorDatabaseHostname: "pgvector",
VectorDatabasePort: 5432,
VectorDatabaseUser: "postgres",
VectorDatabasePassword: "password",
GenerationProvider: "ollama",
GenerationApiKey: "",
GenerationModel: "qwen2.5:7b",
HuggingFaceApiKey: "",
Temperature: 0.1,
TopP: 0.95,
MaxTokens: 500,
OllamaHostname: "ollama",
OllamaPort: 11434,
ContextSort: true,
SortByMaxSimilarity: true,
ContextScope: 0,
Rerank: true,
RerankModel: "cross-encoder/ms-marco-MiniLM-L-6-v2",
RerankTopK: 3,
});
console.log(response);
} catch (err) {
console.log("Error creating assistant:", err);
}
};
createAssistant();
import view_sdk
from view_sdk import assistant
from view_sdk.sdk_configuration import Service
sdk = view_sdk.configure(
access_key="default",
base_url="localhost",
tenant_guid="tenant-guid",
service_ports={Service.ASSISTANT: 8000},
)
def createConfig():
result = assistant.Config.create(
Name= "Basic RAG Assistant",
Description= "uses qwen2.5:7b - ollama",
SystemPrompt= "Use the provided context to answer questions.",
EmbeddingModel= "sentence-transformers/all-MiniLM-L6-v2",
MaxResults= 10,
VectorDatabaseName= "vectordb",
VectorDatabaseTable= "minilm",
VectorDatabaseHostname= "pgvector",
VectorDatabasePort= 5432,
VectorDatabaseUser= "postgres",
VectorDatabasePassword= "password",
GenerationProvider= "ollama",
GenerationApiKey= "",
GenerationModel= "qwen2.5:7b",
HuggingFaceApiKey= "",
Temperature= 0.1,
TopP= 0.95,
MaxTokens= 500,
OllamaHostname= "ollama",
OllamaPort= 11434,
ContextSort= True,
SortByMaxSimilarity= True,
ContextScope= 0,
Rerank= True,
RerankModel= "cross-encoder/ms-marco-MiniLM-L-6-v2",
RerankTopK= 3
)
print(result)
createConfig()
using View.Sdk;
using View.Sdk.Assistant;
ViewAssistantSdk sdk = new ViewAssistantSdk(Guid.Parse("<tenant-guid>"),"default", "http://localhost:8000/");
AssistantConfig config = new AssistantConfig
{
Name = "Basic RAG Assistant",
Description = "uses qwen2.5:7b - ollama",
SystemPrompt = "Use the provided context to answer questions.",
EmbeddingModel = "sentence-transformers/all-MiniLM-L6-v2",
MaxResults = 10,
VectorDatabaseName = "vectordb",
VectorDatabaseTable = "minilm",
VectorDatabaseHostname = "pgvector",
VectorDatabasePort = 5432,
VectorDatabaseUser = "postgres",
VectorDatabasePassword = "password",
GenerationProvider = "ollama",
GenerationApiKey = "",
GenerationModel = "qwen2.5:7b",
HuggingFaceApiKey = "",
Temperature = 0.1,
TopP = 0.95,
MaxTokens = 500,
OllamaHostname = "ollama",
OllamaPort = 11434,
ContextSort = true,
SortByMaxSimilarity = true,
ContextScope = 0,
Rerank = true,
RerankModel = "cross-encoder/ms-marco-MiniLM-L-6-v2",
RerankTopK = 3
};
AssistantConfiguration response = await sdk.Config.CreateRag(config);
Create chat-only config
To create a chat-only configuration, call POST /v1.0/tenants/[tenant-guid]/assistant/configs with "ChatOnly" set to true.
curl --location 'http://view.homedns.org:8000/v1.0/tenants/00000000-0000-0000-0000-000000000000/assistant/configs' \
--header 'Content-Type: application/json' \
--data '{
"Name": "Chat Only Blackbeard Assistant",
"Description": "uses qwen2.5:7b - ollama",
"SystemPrompt": "You are Edward Teach (c. 1680 – 22 November 1718), better known as the pirate Blackbeard. Talk like a pirate and only answer questions with period correct answers.",
"GenerationProvider": "ollama",
"GenerationApiKey": "",
"GenerationModel": "qwen2.5:7b",
"Temperature": 0.1,
"TopP": 0.95,
"MaxTokens": 500,
"OllamaHostname": "ollama",
"OllamaPort": 11434,
"ChatOnly": true
}'
import { ViewAssistantSdk } from "view-sdk";
const assistant = new ViewAssistantSdk(
"http://localhost:8000/", //endpoint
"<tenant-guid>", //tenant Id
"default" //access token
);
const createAssistant = async () => {
try {
const response = await assistant.AssistantConfig.create({
Name: "Chat Only Blackbeard Assistant",
Description: "uses qwen2.5:7b - ollama",
SystemPrompt:
"You are Edward Teach (c. 1680 – 22 November 1718), better known as the pirate Blackbeard. Talk like a pirate and only answer questions with period correct answers.",
GenerationProvider: "ollama",
GenerationApiKey: "",
GenerationModel: "qwen2.5:7b",
Temperature: 0.1,
TopP: 0.95,
MaxTokens: 500,
OllamaHostname: "ollama",
OllamaPort: 11434,
ChatOnly: true,
});
console.log(response);
} catch (err) {
console.log("Error creating assistant:", err);
}
};
createAssistant();
import view_sdk
from view_sdk import assistant
from view_sdk.sdk_configuration import Service
sdk = view_sdk.configure(
access_key="default",
base_url="localhost",
tenant_guid="tenant-guid",
service_ports={Service.ASSISTANT: 8000},
)
def createConfig():
result = assistant.Config.create(
Name= "Chat Only Blackbeard Assistant",
Description= "uses qwen2.5:7b - ollama",
SystemPrompt= "You are Edward Teach (c. 1680 – 22 November 1718), better known as the pirate Blackbeard. Talk like a pirate and only answer questions with period correct answers.",
GenerationProvider= "ollama",
GenerationApiKey= "",
GenerationModel= "qwen2.5:7b",
Temperature= 0.1,
TopP= 0.95,
MaxTokens= 500,
OllamaHostname= "ollama",
OllamaPort= 11434,
ChatOnly= True
)
print(result)
createConfig()
using View.Sdk;
using View.Sdk.Assistant;
ViewAssistantSdk sdk = new ViewAssistantSdk(Guid.Parse("<tenant-guid>"),"default", "http://localhost:8000/");
AssistantConfig config = new AssistantConfig
{
Name = "Chat Only Blackbeard Assistant",
Description = "uses qwen2.5:7b - ollama",
SystemPrompt = "You are Edward Teach (c. 1680 – 22 November 1718), better known as the pirate Blackbeard. Talk like a pirate and only answer questions with period correct answers.",
GenerationProvider = "ollama",
GenerationApiKey = "",
GenerationModel = "qwen2.5:7b",
Temperature = 0.1,
TopP = 0.95,
MaxTokens = 500,
OllamaHostname = "ollama",
OllamaPort = 11434,
ChatOnly = true
};
AssistantConfiguration response = await sdk.Config.CreateRag(config);
Read
To read a config by GUID, call GET /v1.0/tenants/[tenant-guid]/assistant/configs/[config-guid]
curl --location 'http://view.homedns.org:8000/v1.0/tenants/00000000-0000-0000-0000-000000000000/assistant/configs/8d48511f-dc01-4b11-9eb7-f5bad86c482c'
import { ViewAssistantSdk } from "view-sdk";
const assistant = new ViewAssistantSdk(
"http://localhost:8000/", //endpoint
"<tenant-guid>", //tenant Id
"default" //access token
);
const readAssistant = async () => {
try {
const response = await assistant.AssistantConfig.read(
"<config-guid>"
);
console.log(response);
} catch (err) {
console.log("Error reading assistant:", err);
}
};
readAssistant();
import view_sdk
from view_sdk import assistant
from view_sdk.sdk_configuration import Service
sdk = view_sdk.configure(
access_key="default",
base_url="localhost",
tenant_guid="tenant-guid",
service_ports={Service.ASSISTANT: 8000},
)
def readConfig():
result = assistant.Config.retrieve("<config-guid>")
print(result)
readConfig()
using View.Sdk;
using View.Sdk.Assistant;
ViewAssistantSdk sdk = new ViewAssistantSdk(Guid.Parse("<tenant-guid>"),"default", "http://localhost:8000/");
AssistantConfiguration response = await sdk.Config.Retrieve(Guid.Parse("<config-guid>"));
Read all
To read all configs, call GET /v1.0/tenants/[tenant-guid]/assistant/configs
curl --location 'http://view.homedns.org:8000/v1.0/tenants/00000000-0000-0000-0000-000000000000/assistant/configs'
import { ViewAssistantSdk } from "view-sdk";
const assistant = new ViewAssistantSdk(
"http://localhost:8000/", //endpoint
"<tenant-guid>", //tenant Id
"default" //access token
);
const readAllConfigs = async () => {
try {
const response = await assistant.AssistantConfig.readAll();
console.log(response);
} catch (err) {
console.log("Error reading all configs:", err);
}
};
readAllConfigs();
import view_sdk
from view_sdk import assistant
from view_sdk.sdk_configuration import Service
sdk = view_sdk.configure(
access_key="default",
base_url="localhost",
tenant_guid="tenant-guid",
service_ports={Service.ASSISTANT: 8000},
)
def readAllConfigs():
result = assistant.Config.retrieve_all()
print(result)
readAllConfigs()
using View.Sdk;
using View.Sdk.Assistant;
ViewAssistantSdk sdk = new ViewAssistantSdk(Guid.Parse("<tenant-guid>"),"default", "http://localhost:8000/");
AssistantConfigurationResponse response = await sdk.Config.RetrieveMany();
Response
{
"AssistantConfigs": [
{
"GUID": "8d48511f-dc01-4b11-9eb7-f5bad86c482c",
"Name": "botox",
"Description": null,
"CreatedUTC": "2025-04-11T19:05:56.713900Z"
}
]
}
Update
To update a config by GUID, call PUT /v1.0/tenants/[tenant-guid]/assistant/configs/[config-guid]
curl --location --request PUT 'http://view.homedns.org:8000/v1.0/tenants/00000000-0000-0000-0000-000000000000/assistant/configs/00000000-0000-0000-0000-000000000000' \
--header 'Content-Type: application/json' \
--data '{
"Name": "Updated RAG Assistant",
"Description": "uses qwen2.5:7b - ollama",
"SystemPrompt": "Use the provided context to answer questions.",
"EmbeddingModel": "sentence-transformers/all-MiniLM-L6-v2",
"MaxResults": 10,
"VectorDatabaseName": "vectordb",
"VectorDatabaseTable": "minilm",
"VectorDatabaseHostname": "pgvector",
"VectorDatabasePort": 5432,
"VectorDatabaseUser": "postgres",
"VectorDatabasePassword": "password",
"GenerationProvider": "ollama",
"GenerationApiKey": "",
"GenerationModel": "qwen2.5:7b",
"HuggingFaceApiKey": "",
"Temperature": 0.1,
"TopP": 0.95,
"MaxTokens": 500,
"OllamaHostname": "ollama",
"OllamaPort": 11434,
"ContextSort": true,
"SortByMaxSimilarity": true,
"ContextScope": 0,
"Rerank": true,
"RerankModel": "cross-encoder/ms-marco-MiniLM-L-6-v2",
"RerankTopK": 3
}'
import { ViewAssistantSdk } from "view-sdk";
const assistant = new ViewAssistantSdk(
"http://localhost:8000/", //endpoint
"<tenant-guid>", //tenant Id
"default" //access token
);
const updateAssistantConfig = async () => {
try {
const response = await assistant.AssistantConfig.update({
GUID: "<config-guid>",
Name: "Updated RAG Assistant",
Description: "uses qwen2.5:7b - ollama",
SystemPrompt: "Use the provided context to answer questions.",
EmbeddingModel: "sentence-transformers/all-MiniLM-L6-v2",
MaxResults: 10,
VectorDatabaseName: "vectordb",
VectorDatabaseTable: "minilm",
VectorDatabaseHostname: "pgvector",
VectorDatabasePort: 5432,
VectorDatabaseUser: "postgres",
VectorDatabasePassword: "password",
GenerationProvider: "ollama",
GenerationModel: "qwen2.5:7b",
GenerationApiKey: "",
Temperature: 0.1,
TopP: 0.95,
MaxTokens: 500,
OllamaHostname: "ollama",
OllamaPort: 11434,
OllamaContextLength: null,
ContextSort: true,
SortBySimilarity: true,
ContextScope: 0,
Rerank: true,
RerankModel: "cross-encoder/ms-marco-MiniLM-L-6-v2",
RerankTopK: 3,
RerankLevel: "Chunk",
TimestampEnabled: true,
TimestampFormat: "Date",
TimestampTimezone: "UTC",
UseCitations: false,
CreatedUTC: "2025-04-22T09:57:18.942204",
LastModifiedUTC: "2025-04-22T09:57:18.942204",
ChatOnly: false,
});
console.log(response);
} catch (err) {
console.log("Error updating assistant config:", err);
}
};
updateAssistantConfig();
import view_sdk
from view_sdk import assistant
from view_sdk.sdk_configuration import Service
sdk = view_sdk.configure(
access_key="default",
base_url="localhost",
tenant_guid="tenant-guid",
service_ports={Service.ASSISTANT: 8000},
)
def updateConfig():
updateConfig = assistant.Config.update("<config-guid>",
Name= "Chat Only Blackbeard Assistant [Updated]",
Description= "uses qwen2.5:7b - ollama",
SystemPrompt= "You are Edward Teach (c. 1680 – 22 November 1718), better known as the pirate Blackbeard. Talk like a pirate and only answer questions with period correct answers.",
GenerationProvider= "ollama",
GenerationApiKey= "",
GenerationModel= "qwen2.5:7b",
Temperature= 0.1,
TopP= 0.95,
MaxTokens= 500,
OllamaHostname= "ollama",
OllamaPort= 11434,
ChatOnly= True
)
print(updateConfig)
updateConfig()
using View.Sdk;
using View.Sdk.Assistant;
ViewAssistantSdk sdk = new ViewAssistantSdk(Guid.Parse("<tenant-guid>"),"default", "http://localhost:8000/");
AssistantConfig config = new AssistantConfig
{
Name = "Updated RAG Assistant",
Description = "uses qwen2.5:7b - ollama",
SystemPrompt = "Use the provided context to answer questions.",
EmbeddingModel = "sentence-transformers/all-MiniLM-L6-v2",
MaxResults = 10,
VectorDatabaseName = "vectordb",
VectorDatabaseTable = "minilm",
VectorDatabaseHostname = "pgvector",
VectorDatabasePort = 5432,
VectorDatabaseUser = "postgres",
VectorDatabasePassword = "password",
GenerationProvider = "ollama",
GenerationApiKey = "",
GenerationModel = "qwen2.5:7b",
HuggingFaceApiKey = "",
Temperature = 0.1,
TopP = 0.95,
MaxTokens = 500,
OllamaHostname = "ollama",
OllamaPort = 11434,
ContextSort = true,
SortByMaxSimilarity = true,
ContextScope = 0,
Rerank = true,
RerankModel = "cross-encoder/ms-marco-MiniLM-L-6-v2",
RerankTopK = 3
};
AssistantConfiguration response = await sdk.Config.Update(config);
Delete
To delete a config by GUID, call DELETE /v1.0/tenants/[tenant-guid]/assistant/configs/[config-guid]
curl --location --request DELETE 'http://view.homedns.org:8000/v1.0/tenants/00000000-0000-0000-0000-000000000000/assistant/configs/8d48511f-dc01-4b11-9eb7-f5bad86c482c'
import { ViewAssistantSdk } from "view-sdk";
const assistant = new ViewAssistantSdk(
"http://localhost:8000/", //endpoint
"<tenant-guid>", //tenant Id
"default" //access token
);
const deleteAssistant = async () => {
try {
const response = await assistant.AssistantConfig.delete(
"<config-guid>"
);
console.log(response);
} catch (err) {
console.log("Error deleting assistant:", err);
}
};
deleteAssistant();
import view_sdk
from view_sdk import assistant
from view_sdk.sdk_configuration import Service
sdk = view_sdk.configure(
access_key="default",
base_url="localhost",
tenant_guid="tenant-guid",
service_ports={Service.ASSISTANT: 8000},
)
def deleteConfig():
result = assistant.Config.delete("<config-guid>")
print(result)
deleteConfig()
using View.Sdk;
using View.Sdk.Assistant;
ViewAssistantSdk sdk = new ViewAssistantSdk(Guid.Parse("<tenant-guid>"),"default", "http://localhost:8000/");
bool deleted = await sdk.Config.Delete(Guid.Parse("<config-guid>"));
Check existence
To check existence of a config by GUID, call HEAD /v1.0/tenants/[tenant-guid]/assistant/configs/[config-guid]
curl --location --head 'http://view.homedns.org:8000/v1.0/tenants/00000000-0000-0000-0000-000000000000/assistant/configs/00000000-0000-0000-0000-000000000000'
import { ViewAssistantSdk } from "view-sdk";
const assistant = new ViewAssistantSdk(
"http://localhost:8000/", //endpoint
"<tenant-guid>", //tenant Id
"default" //access token
);
const existingAssistant = async () => {
try {
const response = await assistant.AssistantConfig.exists(
"<config-guid>"
);
console.log(response);
} catch (err) {
console.log("Error checking assistant existence:", err);
}
};
existingAssistant();
import view_sdk
from view_sdk import assistant
from view_sdk.sdk_configuration import Service
sdk = view_sdk.configure(
access_key="default",
base_url="localhost",
tenant_guid="tenant-guid",
service_ports={Service.ASSISTANT: 8000},
)
def existsConfig():
result = assistant.Config.exists("<config-guid>")
print(result)
existsConfig()
using View.Sdk;
using View.Sdk.Assistant;
ViewAssistantSdk sdk = new ViewAssistantSdk(Guid.Parse("<tenant-guid>"),"default", "http://localhost:8000/");
bool exists = await sdk.Config.Exists(Guid.Parse("<config-guid>"));
Best Practices
When managing assistant configurations in the View Assistant platform, consider the following recommendations for optimal RAG configuration, chat-only assistants, and configuration management:
- Configuration Strategy: Implement effective configuration strategies for different assistant types (RAG vs chat-only) based on your use cases and requirements
- RAG Optimization: Configure appropriate vector database settings, embedding models, and retrieval parameters for optimal RAG performance
- Generation Settings: Optimize generation parameters (temperature, top-p, max tokens) based on your content types and quality requirements
- System Prompts: Design effective system prompts that define assistant behavior and response quality for your specific use cases
- Configuration Management: Implement proper configuration lifecycle management including creation, updates, testing, and cleanup
Next Steps
After successfully managing assistant configurations, you can:
- Chat Threads: Create and manage chat threads using your assistant configurations for persistent conversations
- Model Management: Manage and optimize language models for different assistant scenarios and performance requirements
- RAG Implementation: Implement advanced RAG capabilities using your configured assistants for enhanced content understanding
- Assistant Integration: Integrate assistant configurations with your applications for enhanced user experiences and AI-powered interactions
- Configuration Analytics: Develop configuration analytics and performance monitoring for improved assistant effectiveness and optimization