OS-nano

Powered by NVIDIA AI, OS-nano fuses visual and voice perception for environment understanding and enables autonomous task execution for a wide range of embodied robot applications.


Multi-scenario Fusion Navigation Algorithm

Fusion of multi-source sensor data and diverse algorithms supports navigation development and deployment in complex environments, laying a solid technical foundation for autonomous driving, robotics, and advanced navigation applications.
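As a minimal illustration of multi-source fusion, the sketch below blends a drifting gyro heading with a noisy-but-drift-free absolute heading fix using a complementary filter. The gain, rates, and bias values are illustrative assumptions, not the product's actual fusion pipeline.

```python
# Complementary-filter sketch for heading fusion.
# alpha, rates, and the gyro bias are illustrative assumptions.

def fuse_heading(prev, gyro_rate, abs_heading, dt, alpha=0.98):
    """Blend integrated gyro rate (smooth, drifts) with an
    absolute heading fix (noisy, drift-free)."""
    predicted = prev + gyro_rate * dt      # dead-reckoned estimate
    return alpha * predicted + (1.0 - alpha) * abs_heading

true_rate, bias, dt = 0.1, 0.05, 0.1       # rad/s, rad/s, s
true_heading = fused = integrated = 0.0
for _ in range(200):                       # 20 s of constant turning
    true_heading += true_rate * dt
    integrated += (true_rate + bias) * dt  # gyro-only: drift grows
    fused = fuse_heading(fused, true_rate + bias, true_heading, dt)
```

After 20 s the gyro-only estimate has drifted by about 1 rad, while the fused estimate stays within a bounded error set by the filter gain.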


Multi-stage Practice and Extended Functions

A full-process workflow covering hardware, algorithms, system integration, installation, and innovative research — designed to drive practical innovation and solve complex technical problems.

  • SLAM & AI Practice: Localization, Planning, Navigation & AI Control Synergy
  • Deep Model Training: Visual Annotation, Simulation & Pre-trained Model Training
  • AI Large Model Dev: AI Algorithm Expansion for Diverse Scenarios
  • Hardware Assembly & Debugging: Chassis Hardware & Electrical Commissioning
  • Motion Control Calibration: Chassis CAN Bus & Control Calibration
  • Chassis-Nav Joint Debugging: Multi-sensor Calibration & Navigation Tuning
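To make the CAN-bus motion-control step above concrete, the sketch below packs a wheel-speed command into an 8-byte CAN payload. The frame ID (`0x111`) and payload layout (two little-endian int16 wheel speeds in mm/s plus 4 padding bytes) are illustrative assumptions, not the documented OS-nano chassis CAN protocol.

```python
import struct

# Hypothetical velocity-command frame for a differential chassis.
# Frame ID and payload layout are assumptions for illustration only;
# consult the chassis CAN protocol document for the real format.

CMD_VEL_ID = 0x111

def pack_cmd_vel(left_mm_s: int, right_mm_s: int) -> bytes:
    """Pack left/right wheel speeds into an 8-byte CAN payload:
    two little-endian int16 values followed by 4 zero pad bytes."""
    return struct.pack("<hh4x", left_mm_s, right_mm_s)

payload = pack_cmd_vel(300, -300)   # opposite wheel speeds: turn in place
```

A real deployment would hand `CMD_VEL_ID` and `payload` to a CAN interface (e.g. SocketCAN) at a fixed control rate.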

Powered by modular hardware, multimodal AI large-model deployment, and ROS ecosystem integration, OS-nano delivers an all-in-one robotics platform featuring AI algorithms, embodied interaction, autonomous navigation, and vehicle–road collaboration.

  • AI Discipline

    Lowers the deployment threshold for multimodal AI large models, improves on-device training efficiency, and provides comprehensive support from model implementation to application validation.

  • Embodied AI Discipline

    Hardware entity + multimodal perception + real-time action control: key support for developing agent-environment interaction intelligence.

  • Mobile Robotics Discipline

    Navigation module and modular chassis building, with full-cycle support from basic learning to in-depth innovation in autonomous navigation, scenario adaptation, and function expansion.

  • Intelligent Connected Discipline

    Enables vision-based autonomous driving, advances vehicle-environment interaction capabilities, and provides end-to-end technical development solutions.


Comprehensive Application Development

  • Voice Control
  • Lane Recognition
  • Swarm Control
  • Visual Navigation
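The Lane Recognition application above reduces, at its simplest, to finding where a bright line sits in a camera row and steering toward it. The sketch below computes a normalized steering error from one thresholded image row; the threshold value and image layout are illustrative assumptions, not the platform's lane-recognition pipeline.

```python
# Line-patrolling sketch: threshold one camera row, locate the
# line's centroid, and derive a steering error in [-1, 1].
# The threshold and row width are illustrative assumptions.

def steering_error(row, threshold=128):
    """Negative result steers left, positive steers right,
    0.0 means the line is centered (or lost)."""
    hits = [x for x, pix in enumerate(row) if pix >= threshold]
    if not hits:
        return 0.0                       # line lost: hold course
    centroid = sum(hits) / len(hits)
    center = (len(row) - 1) / 2
    return (centroid - center) / center

# Bright line occupying pixels 60-79 of a 160-pixel-wide row,
# i.e. slightly left of center:
row = [0] * 160
for x in range(60, 80):
    row[x] = 255
```

A full implementation would apply this per frame (e.g. with OpenCV) and feed the error into a steering controller.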

Multi-industry Applications

  • Development
  • Training
  • Education
  • Research

Support diverse modular applications and work with you to unlock new possibilities across industries.

  • Chassis Types: Ackermann Chassis, 4WT/4WD Chassis, Tracked Chassis, Differential Chassis
  • Embedded Computer: Jetson Orin Nano (8 GB)
  • Storage: 128 GB
  • Computing Power: 40 TOPS
  • Power: 10 W
  • LiDAR: 2D LiDAR (16 m measuring radius) / 3D LiDAR (100 m measuring radius)
  • Depth Camera: Astra Pro Plus Depth Camera
  • Sound Pick-up Unit: Six-channel microphone array, 10 m reception range, built-in neural network processor, custom voice lexicon
  • System Configuration: Ubuntu 22.04
  • Nav System Version: ROS 2
  • Operating Environment: Indoor / outdoor (non-rainy, non-snowy)
  • Hardware Functions: Hardware Installation & Commissioning, CAN Communication Analysis & Application, Chassis Kinematics Analysis & Application, Battery BMS Analysis & Application
  • Auto-Nav Install, Debug & Test Functions: Auto-Nav Hardware Install, Debug & Test, Auto-Nav Software Architecture Analysis & Application, Sensor Fusion Data Analysis & Application, Chassis Control Fusion Analysis & Application
  • Nav Functions: Dynamic Obstacle Avoidance, Point-to-Point Navigation, Multi-Point Navigation, TEB and DWA Path Planning, LiDAR Angle Masking, LiDAR Mapping Navigation
  • Mapping Functions: RTAB-Map Visual-Only / RTAB-Map Visual-LiDAR / GMapping / Hector / Karto / Cartographer Mapping, RRT / Frontier / Explore_Lite Autonomous Mapping
  • Robot Formation Function: Leader-Follower Algorithm
  • HMI Functions: Sound Source Localization, Voice Summon / Control / Navigation / Broadcast / Interaction, LiDAR Following
  • Vision Functions: OpenCV Application and Tutorial, Webcam Monitoring, Depth Vision Follow, KCF Follow, AR Tag Recognition, RGB Line Patrolling, Human Recognition & Following, 3D Vision Mapping, 3D Vision Navigation
  • Deep Learning Functions: YOLO Object / Traffic Sign Recognition, Deep Learning Model Training, Gesture Control, Sandbox Map Auto-Driving
  • AI Large Model Deployment: Large Model Application / Semantic Understanding / Emotion Perception / Voice Interaction
  • AI Vision Large Model: Vision Large Model Target Recognition / Scene Understanding / Text Recognition
  • AI Large Model Robot: Multimodal Large Model Voice Control / Auto Line Patrolling / Color Tracking, Embodied Intelligence Real-time Monitoring / Vision Tracking / Smart Butler, Multimodal Information Fusion Overview
  • Configuration: Nav Module + Chassis / Nav Module + Chassis + Training & Assessment Plan
  • Config Description: Components and functions support modular upgrades; specifications are subject to the actual product configuration.
  • Tech Docs: API Interfaces, Open-Source Code, Chassis CAN Protocol, Chassis STP Drawing, Driver ROS Package
  • Services: One-to-One Development Tech Support
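The leader-follower formation function listed above can be sketched as a simple proportional controller: the follower tracks a target pose a fixed gap behind the leader and converts the remaining error into linear and angular velocity commands. The gains, the gap, and the function interface are illustrative assumptions, not the platform's shipped formation algorithm.

```python
import math

# Leader-follower formation step sketch. Gains, gap, and the
# (x, y, heading) pose tuples are illustrative assumptions.

def follower_cmd(leader, follower, gap=0.8, k_lin=1.0, k_ang=2.0):
    """leader/follower: (x, y, heading) in metres/radians.
    Returns (v, omega) velocity commands for the follower."""
    lx, ly, lth = leader
    fx, fy, fth = follower
    # Target pose: `gap` metres behind the leader along its heading.
    tx = lx - gap * math.cos(lth)
    ty = ly - gap * math.sin(lth)
    dx, dy = tx - fx, ty - fy
    dist = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - fth
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    return k_lin * dist, k_ang * bearing
```

Running this at the chassis control rate, with poses from localization, drives the follower toward its slot in the formation; a deadband near zero distance would prevent oscillation at the target.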