AimlapiEmbeddings
This will help you get started with AI/ML API embedding models using LangChain. For detailed documentation on AimlapiEmbeddings features and configuration options, please refer to the API reference.
Overview
Integration details
| Provider | Package |
|---|---|
| AI/ML API | langchain-aimlapi |
Setup
To access AI/ML API embedding models you'll need to create an account, get an API key, and install the langchain-aimlapi integration package.
Credentials
Head to https://aimlapi.com/app/ to sign up and generate an API key. Once you've done this, set the AIMLAPI_API_KEY environment variable:
import getpass
import os
if not os.getenv("AIMLAPI_API_KEY"):
    os.environ["AIMLAPI_API_KEY"] = getpass.getpass("Enter your AI/ML API key: ")
To enable automated tracing of your model calls, set your LangSmith API key:
# os.environ["LANGSMITH_TRACING"] = "true"
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
Installation
The LangChain AI/ML API integration lives in the langchain-aimlapi package:
%pip install -qU langchain-aimlapi
Note: you may need to restart the kernel to use updated packages.
Instantiation
Now we can instantiate our embeddings model and perform embedding operations:
from langchain_aimlapi import AimlapiEmbeddings
embeddings = AimlapiEmbeddings(
    model="text-embedding-ada-002",
)
Indexing and Retrieval
Embedding models are often used in retrieval-augmented generation (RAG) flows. Below, we index and retrieve data with the embeddings object we initialized above, using InMemoryVectorStore.
from langchain_core.vectorstores import InMemoryVectorStore
text = "LangChain is the framework for building context-aware reasoning applications"
vectorstore = InMemoryVectorStore.from_texts(
    [text],
    embedding=embeddings,
)
retriever = vectorstore.as_retriever()
retrieved_documents = retriever.invoke("What is LangChain?")
retrieved_documents[0].page_content
'LangChain is the framework for building context-aware reasoning applications'
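You can also query the vector store directly, without going through the retriever interface. A minimal sketch using the standard similarity_search method shared by LangChain vector stores (the query string and k value here are illustrative):
# Search the vector store directly; k controls how many documents are returned
results = vectorstore.similarity_search("What is LangChain?", k=1)
print(results[0].page_content)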
Direct Usage
You can directly call embed_query and embed_documents for custom embedding scenarios.
Embed single text:
single_vector = embeddings.embed_query(text)
print(str(single_vector)[:100])
[-0.0011368310078978539, 0.00714730704203248, -0.014703838154673576, -0.034064359962940216, 0.011239
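LangChain embedding models also expose async counterparts, aembed_query and aembed_documents. A minimal sketch (top-level await works in a notebook; in a plain script, wrap the call with asyncio.run):
# Async variant of embed_query, useful inside async applications
single_vector_async = await embeddings.aembed_query(text)
print(str(single_vector_async)[:100])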
Embed multiple texts:
text2 = (
    "LangGraph is a library for building stateful, multi-actor applications with LLMs"
)
two_vectors = embeddings.embed_documents([text, text2])
for vector in two_vectors:
    print(str(vector)[:100])
[-0.0011398226488381624, 0.007080476265400648, -0.014682820066809654, -0.03407655283808708, 0.011276
[-0.005510928109288216, 0.016650190576910973, -0.011078780516982079, -0.03116573952138424, -0.003735
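To see how embeddings are compared in practice, you can compute the cosine similarity between a query vector and each document vector yourself. A minimal sketch using only the Python standard library, reusing the two_vectors computed above:
import math

def cosine_similarity(a, b):
    # Cosine similarity = dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query_vector = embeddings.embed_query("What is LangChain?")
for vector in two_vectors:
    print(cosine_similarity(query_vector, vector))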
API Reference
For detailed documentation on AimlapiEmbeddings features and configuration options, please refer to the API reference.
Related
- Embedding model conceptual guide
- Embedding model how-to guides