Prepare Model
To prepare a model version to be served in Model, please follow these steps on your local system:

- Install the latest Python Instill SDK:

  ```bash
  pip install instill-sdk
  ```

- Create an empty folder for your custom model and run the `init` command to generate the required files, with sample code and comments that describe their function:

  ```bash
  instill init
  ```

- Modify the model config file (`instill.yaml`) to describe your model's dependencies.
- Modify the `model.py` file, which defines the model class that will be decorated into a servable model with the Python Instill SDK.
- Organize the repository files into a valid model layout.
Model Configuration
Model configuration is handled within the `instill.yaml` file that accompanies the model. It describes the model's necessary dependency information and is crucial for reproducibility, sharing, and discoverability.

In the `instill.yaml` file, you can specify the following details:
- `build`:
  - `gpu`:
    - Required: `boolean`
    - Description: Specifies if the model needs GPU support.
  - `python_version`:
    - Required: `string`
    - Supported Versions: `3.11`
  - `cuda_version`:
    - Optional: `string`
    - Supported Versions: `11.5`, `11.6`, `11.7`, `11.8`, `12.1`
    - Default: Defaults to `11.8` if not specified or empty.
  - `python_packages`:
    - Optional: `list`
    - Description: Lists packages to be installed with `pip`.
  - `system_packages`:
    - Optional: `list`
    - Description: Lists packages to be installed from the `apt` package manager. The model image is based on Ubuntu 22.04 LTS.
Below is an example `instill.yaml` for the TinyLlama model:

```yaml
build:
  gpu: true
  python_version: "3.11" # support only 3.11
  cuda_version: "12.1"
  python_packages:
    - torch==2.2.1
    - transformers==4.36.2
    - accelerate==0.25.0
```
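The same schema also covers CPU-only models and system-level dependencies. The following is a hypothetical sketch rather than a configuration taken from the TinyLlama example; the package names (`opencv-python-headless`, `numpy`, `ffmpeg`) are illustrative placeholders for whatever your model actually needs:

```yaml
build:
  gpu: false                 # CPU-only image; cuda_version can be omitted
  python_version: "3.11"     # only 3.11 is supported
  python_packages:           # installed with pip
    - opencv-python-headless==4.9.0.80
    - numpy==1.26.4
  system_packages:           # installed with apt on the Ubuntu 22.04 LTS base image
    - ffmpeg
```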
Model Layout
To deploy a model in Model, we suggest preparing the model files in a layout similar to the following:
```
.
├── instill.yaml
├── model.py
├── <additional_modules>
└── <weights>
    ├── <weight_file_1>
    ├── <weight_file_2>
    ├── ...
    └── <weight_file_n>
```
The above layout displays a typical model in Model, consisting of:

- `instill.yaml` - the model config file that describes the model's dependencies
- `model.py` - a decorated model class that contains custom inference logic
- `<additional_modules>` - a directory that holds supporting Python modules if necessary
- `<weights>` - a directory that holds the weight files if necessary
You can name the `<weights>` and `<additional_modules>` folders freely, provided that they can be properly loaded and used by the `model.py` file, as sketched below.
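As an illustration of how those folders are typically consumed, here is a minimal, hypothetical sketch of the top of a `model.py` that loads weights from a local `weights/` folder and imports a helper from an `additional_modules/` folder. The folder names, the `postprocess` helper module, and the use of a `transformers` pipeline are assumptions for this example, not requirements of the SDK:

```python
# Hypothetical sketch: resolve sibling folders relative to model.py so the
# layout keeps working inside the built model image.
import os
import sys

from transformers import pipeline

MODEL_DIR = os.path.dirname(os.path.abspath(__file__))

# "<weights>" folder, named "weights" here purely as an example
WEIGHTS_DIR = os.path.join(MODEL_DIR, "weights")

# "<additional_modules>" folder, named "additional_modules" here as an example;
# adding it to sys.path lets model.py import its helpers directly
sys.path.append(os.path.join(MODEL_DIR, "additional_modules"))
from postprocess import clean_text  # hypothetical helper module


def load_pipeline():
    # Point the framework at the local weight files instead of downloading
    # them at request time.
    return pipeline("text-generation", model=WEIGHTS_DIR)
```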
Prepare Model Code
To implement a custom model that can be deployed and served in Model, you only need to construct a simple model class within the `model.py` file.
The custom model class is required to contain two methods (see the minimal sketch after this list):

- `__init__`: This is where the model loading process is defined, allowing the weights to be stored in memory and yielding faster auto-scaling behavior.
- `__call__`: This is the inference request entrypoint, and is where you implement your model inference logic.
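The following is a minimal structural sketch of such a class. The class name `MyModel` and the method bodies are placeholders; it only illustrates the shape that the decorator and entrypoint expect, while the full TinyLlama example further below fills in real loading and inference logic:

```python
# Minimal structural sketch; MyModel and its method bodies are placeholders.
from instill.helpers.ray_config import instill_deployment, InstillDeployable


@instill_deployment
class MyModel:
    def __init__(self):
        # Load weights/pipelines once here so they stay resident in memory.
        self.model = None  # e.g. a transformers pipeline

    async def __call__(self, request):
        # Parse the request, run inference, and construct the task output here.
        ...


# Expose the deployment entrypoint that Model serves.
entrypoint = InstillDeployable(MyModel).get_deployment_handle()
```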
You will also need to determine which AI task spec your model will follow. The SDK provides helper functions for each task:

- `parse_task_***_to_***_input`: converts the request input into an easy-to-use dataclass
- `construct_task_***_output`: converts various standard-type outputs into the response format
Below is a simple example implementation of the TinyLlama model with explanations:
```python
# import necessary packages
import time

import torch
from transformers import pipeline

# import SDK helper functions which
# parse input request into easily workable dataclass
# and convert output data into response class
from instill.helpers import (
    parse_task_chat_to_chat_input,
    construct_task_chat_output,
)

# ray_config package hosts the decorators and deployment object for the model class
from instill.helpers.ray_config import instill_deployment, InstillDeployable


# use instill_deployment decorator to convert the model class to a servable model
@instill_deployment
class TinyLlama:
    # within the __init__ function, set up the model instance with the desired
    # framework, in this case a transformers pipeline
    def __init__(self):
        self.pipeline = pipeline(
            "text-generation",
            model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
            torch_dtype=torch.bfloat16,
        )

    # __call__ handles the trigger request from Model
    async def __call__(self, request):
        # use the helper package to parse the request and get the corresponding
        # input for the chat task
        conversation_inputs = await parse_task_chat_to_chat_input(request=request)

        # construct the necessary chat task output variables
        finish_reasons = []
        indexes = []
        created = []
        messages = []
        for i, inp in enumerate(conversation_inputs):
            prompt = self.pipeline.tokenizer.apply_chat_template(
                inp.messages,
                tokenize=False,
                add_generation_prompt=True,
            )
            # inference
            sequences = self.pipeline(
                prompt,
                max_new_tokens=inp.max_tokens,
                do_sample=True,
                temperature=inp.temperature,
                top_p=inp.top_p,
            )
            output = (
                sequences[0]["generated_text"]
                .split("<|assistant|>\n")[-1]
                .strip()
            )
            messages.append([{"content": output, "role": "assistant"}])
            finish_reasons.append(["length"])
            indexes.append([i])
            created.append([int(time.time())])

        return construct_task_chat_output(
            request=request,
            finish_reasons=finish_reasons,
            indexes=indexes,
            messages=messages,
            created_timestamps=created,
        )


# now simply declare a global entrypoint for deployment
entrypoint = InstillDeployable(TinyLlama).get_deployment_handle()
```
Once all the required model files are prepared, please refer to the Build Model Image and Push Model Image pages for further information about creating and pushing your custom model image to Model in Instill Core.