This section explains how to run image processing models that you have trained on your own dataset or obtained as pre-trained models.
sudo su
source /opt/t3-edgeai-env
These commands allow you to switch to the root user and load the environment variables required for Edge AI.
If the environment variables file does not exist, you can install the relevant package from the Edge AI Installation section.

Edge AI GStreamer Apps

This project is a collection of open-source reference applications provided by Texas Instruments for rapidly developing AI applications on devices like the T3 Gemstone O1. Built on a GStreamer-based architecture, it offers ready-made pipelines for image processing, object detection, streaming, and other AI workflows on Texas Instruments processors and SoCs.

Once you have completed the Edge AI installation described in the Installation section, you will find the project in the /opt/edgeai-gst-apps directory. The configs folder contains example configurations for several applications, and the project can be run in both Python and C++. For example, the configuration in configs/image_classification.yaml is as follows:
title: "Image Classification"
log_level: 2
inputs:
    input0:
        source: /dev/video-usb-cam0
        format: jpeg
        width: 1280
        height: 720
        framerate: 30
    input1:
        source: /opt/edgeai-test-data/videos/video0_1280_768.h264
        format: h264
        width: 1280
        height: 768
        framerate: 30
        loop: True
    input2:
        source: /opt/edgeai-test-data/images/%04d.jpg
        width: 1280
        height: 720
        index: 0
        framerate: 1
        loop: True
models:
    model0:
        model_path: /opt/model_zoo/TVM-CL-3090-mobileNetV2-tv
        topN: 5
    model1:
        model_path: /opt/model_zoo/TFL-CL-0000-mobileNetV1-mlperf
        topN: 5
    model2:
        model_path: /opt/model_zoo/ONR-CL-6360-regNetx-200mf
        topN: 5
outputs:
    output0:
        sink: kmssink
        width: 1920
        height: 1080
        overlay-perf-type: graph
    output1:
        sink: /opt/edgeai-test-data/output/output_video.mkv
        width: 1920
        height: 1080
    output2:
        sink: /opt/edgeai-test-data/output/output_image_%04d.jpg
        width: 1920
        height: 1080
    output3:
        sink: remote
        width: 1920
        height: 1080
        port: 8081
        host: 127.0.0.1
        encoding: jpeg
        overlay-perf-type: graph

flows:
    flow0: [input2,model1,output0,[320,150,1280,720]]
You can customize the YAML file above according to your own requirements.
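If you prefer to generate variants programmatically, the sketch below loads the stock configuration with PyYAML and redirects flow0 to the live USB camera. The PyYAML dependency and the my_classification.yaml file name are assumptions for illustration, not part of edgeai-gst-apps:

#!/usr/bin/python3
# A minimal sketch, assuming PyYAML is installed on the device.
# The output file name "my_classification.yaml" is hypothetical.

import yaml

CONFIG_DIR = "/opt/edgeai-gst-apps/configs"

with open(f"{CONFIG_DIR}/image_classification.yaml") as f:
    cfg = yaml.safe_load(f)

# Route the live USB camera (input0) through the TFLite model (model1),
# keeping the same output window in the mosaic.
cfg["flows"]["flow0"] = ["input0", "model1", "output0", [320, 150, 1280, 720]]

with open(f"{CONFIG_DIR}/my_classification.yaml", "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)

You can then pass the new file to app_edgeai.py in place of the stock configuration, as shown later in this section.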
TITLE
default:"Image Classification"
required
The title of the application. Used as a reference in log outputs and the interface.
LOG_LEVEL
default:"2"
required
Determines the log detail level. 0: minimal, 5: debug
INPUTS
required
Input data sources to be processed.
INPUT_SOURCE
default:"<device-path>,<file-path>"
required
The path to the input source.
INPUT_FORMAT
default:"jpeg,h264"
required
The format of the input data. For example: jpeg, h264
INPUT_WIDTH
default:"1280"
required
Input image width (pixels)
INPUT_HEIGHT
default:"720"
required
Input image height (pixels)
INPUT_FRAMERATE
default:"30"
required
Input frame rate (FPS)
INPUT_LOOP
default:"True,False"
Whether the input is replayed in a loop when it ends.
MODELS
required
Models to be used
MODEL_PATH
default:"<file-path>"
required
The file path of the model.
MODEL_TOPN
default:"5"
required
The number of top-N predictions to take from the model.
OUTPUTS
required
Output destinations
OUTPUT_SINK
default:"remote,kmssink,<file-path>"
required
The output destination: kmssink for display, remote for network streaming, or a file path.
OUTPUT_WIDTH
default:"1920"
required
Streaming resolution width (pixels)
OUTPUT_HEIGHT
default:"1080"
required
Streaming resolution height (pixels)
OUTPUT_HOST
default:"127.0.0.1"
Host IP for streaming
OUTPUT_PORT
default:"8081"
Streaming port number
OUTPUT_ENCODING
default:"jpeg,h264"
required
Streaming image format
OUTPUT_OVERLAY_PERF_TYPE
default:"graph"
Performance graph overlay type
FLOWS
required
Defines the data flow.
FLOW
default:"[input2, model1, output0, [320,150,1280,720]]"
required
Defines the data flow: input → model → output. Optionally, you can specify an ROI (x, y, width, height).
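The optional ROI positions the flow's scaled output window inside the output mosaic. As a quick sanity check you can verify that a window fits inside the configured output resolution; the validate_roi helper below is a hypothetical illustration, not part of edgeai-gst-apps:

#!/usr/bin/python3
# Hypothetical helper (not part of edgeai-gst-apps): checks that a flow
# ROI [x, y, width, height] stays inside the output resolution.

def validate_roi(roi, out_width=1920, out_height=1080):
    x, y, w, h = roi
    if x < 0 or y < 0 or w <= 0 or h <= 0:
        raise ValueError(f"invalid ROI values: {roi}")
    if x + w > out_width or y + h > out_height:
        raise ValueError(f"ROI {roi} exceeds the {out_width}x{out_height} output")
    return roi

# The ROI used by flow0 above: a 1280x720 window at offset (320, 150).
validate_roi([320, 150, 1280, 720])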
Before starting image processing steps, ensure that you are logged in as root and have loaded the environment variables!
cd /opt/edgeai-gst-apps/apps_python && ./app_edgeai.py ../configs/image_classification.yaml
To terminate the image processing, press CTRL + C once. If you have not modified the output file in the YAML file, you can access the output at /opt/edgeai-test-data/output/output_video.mkv.

Integrating into Your Existing Python Code

By default, TensorFlow Lite runs models on the device’s CPU (Central Processing Unit). A “Delegate” is a mechanism that “delegates” some or all of the computations in your TFLite model from the CPU to more specialized hardware. To use these specialized hardware accelerators (such as the image processing accelerators on the T3 Gemstone O1 Development Board), you need to load the corresponding shared library (a .so file on Linux).
Before starting image processing steps, ensure that you are logged in as root and have loaded the environment variables!
#!/usr/bin/python3

import numpy as np
from tflite_runtime.interpreter import Interpreter
from tflite_runtime.interpreter import load_delegate

# 1. Load the TIDL Edge AI delegate library
try:
    tidl_delegate = load_delegate('/usr/lib/libtidl_tfl_delegate.so')
    delegates_list = [tidl_delegate]
    print("TIDL delegate found and loaded.")
except (ValueError, OSError):
    delegates_list = []  # If the delegate is unavailable, fall back to the CPU
    print("TIDL delegate not found, running on the CPU.")

# 2. Load the model, passing the delegate when creating the Interpreter
interpreter = Interpreter(
    model_path="your_model_name.tflite",
    experimental_delegates=delegates_list
)

# 3. The remaining standard TFLite steps...
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# ... (set the input data, run the model, etc.)
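
The script stops where the input handling begins. A minimal sketch of those remaining steps follows, assuming a classification model that takes a single uint8 image tensor and returns one score vector; verify against input_details, since quantized and float models differ:

# A minimal sketch of the elided steps, assuming a classification model
# with a single uint8 image input and a single score-vector output.
# The random frame is a placeholder for real preprocessed image data.
input_shape = input_details[0]['shape']        # e.g. [1, 224, 224, 3]
dummy_frame = np.random.randint(0, 256, size=tuple(input_shape), dtype=np.uint8)

interpreter.set_tensor(input_details[0]['index'], dummy_frame)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]['index'])[0]
top5 = np.argsort(scores)[::-1][:5]            # top-N, as in the YAML topN field
print("Top-5 class indices:", top5)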