# Platform Overview

## Core Features
- **Model Serving Frameworks:** Support for TensorFlow Serving, TorchServe, and custom FastAPI servers (see the server sketch after this list).
- **API Communication:** REST and gRPC endpoints for seamless interaction (a client example also follows the list).
- **Scalable Infrastructure:** Auto-scaling to handle varying workloads.
- **Edge Deployment:** Reduced latency through geographically distributed nodes.
- **Monitoring Tools:** Real-time insight into model performance.
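To make the "custom FastAPI" option concrete, here is a minimal sketch of a model server, assuming a scikit-learn model serialized with joblib; the file name, endpoint path, and request schema are illustrative assumptions, not part of InfrAI's documented API.

```python
# Minimal custom FastAPI model server (a sketch, assuming a joblib artifact).
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Hypothetical artifact path: InfrAI would mount your uploaded model here.
model = joblib.load("model.joblib")


class PredictRequest(BaseModel):
    features: List[float]


class PredictResponse(BaseModel):
    prediction: float


@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Wrap the single feature vector so it matches the (1, n_features)
    # shape scikit-learn estimators expect, then return the first result.
    prediction = model.predict([req.features])[0]
    return PredictResponse(prediction=float(prediction))
```

Locally you would run this with `uvicorn server:app`; on InfrAI, the container environment you select at deployment time would host it.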
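And a hedged example of calling such a deployment over REST using the standard `requests` library; the host name and JSON shapes are assumptions that match the server sketch above, not a real InfrAI URL.

```python
# Calling the deployed /predict endpoint over REST (hypothetical host).
import requests

resp = requests.post(
    "https://your-deployment.example.com/predict",  # placeholder endpoint
    json={"features": [5.1, 3.5, 1.4, 0.2]},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"prediction": 0.0}
```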
## How It Works
1. **Upload Models:** Use the InfrAI CLI or dashboard to upload your trained models.
2. **Select Deployment Options:** Choose a framework, container environment, and API protocol.
3. **Launch Deployment:** Deploy to decentralized nodes with a single click.
4. **Monitor Performance:** Access dashboards for latency, throughput, and more (a polling sketch follows these steps).
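As one way to consume performance data programmatically rather than through the dashboard, the sketch below polls a hypothetical `/metrics` REST endpoint; the URL and field names are assumptions for illustration, and InfrAI's actual monitoring API may expose different metrics.

```python
# Polling deployment metrics over REST (endpoint and fields are hypothetical).
import time

import requests

METRICS_URL = "https://your-deployment.example.com/metrics"  # placeholder

while True:
    resp = requests.get(METRICS_URL, timeout=10)
    resp.raise_for_status()
    metrics = resp.json()
    # Field names below are assumptions for illustration only.
    print(
        f"p95 latency: {metrics['latency_p95_ms']} ms, "
        f"throughput: {metrics['throughput_rps']} req/s"
    )
    time.sleep(30)  # poll every 30 seconds
```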