On the SharewareOnSale page, click on Download... and follow the on-screen steps.
Sharing personal data: full name and email address.
Features
Below is a thorough breakdown of every core capability built into Vovsoft AI Automator. These features are what make this tool an indispensable addition to any AI-powered productivity setup on Windows:
Local Hardware & Cloud Flexibility — Run AI workflows locally or via cloud APIs with complete freedom over your compute choice.
Ollama Integration — Connect directly to a local Ollama runtime to run open-source LLMs entirely on your own machine without an internet connection.
NVIDIA RTX / AMD / Intel GPU Support — Leverage your existing graphics hardware for fast local inference with supported local models.
CPU Inference Support — Use your processor for local model execution when GPU acceleration is not available.
OpenAI API Support — Connect to GPT models via the OpenAI API for powerful cloud-based text generation.
Anthropic Claude API Support — Route tasks to Claude models for nuanced, high-quality AI responses via cloud.
Google Gemini API Support — Integrate with Gemini to access Google’s multimodal AI capabilities through the cloud.
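The local Ollama runtime mentioned above exposes a simple HTTP API on port 11434. As a minimal sketch of the kind of request a tool like this might send (the model name, prompt, and helper names are illustrative placeholders, not the application's actual internals):

```python
import json
import urllib.request

def build_ollama_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str, host: str = "http://localhost:11434") -> str:
    """Send a prompt to a locally running Ollama instance and return the response text."""
    payload = json.dumps(build_ollama_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires `ollama serve` running locally with the model already pulled, e.g.:
# print(generate("llama3", "Summarize today's notes in one sentence."))
```

Because the endpoint is plain HTTP on localhost, nothing leaves your machine; cloud providers such as OpenAI, Anthropic, or Google are reached the same way, just with a remote URL and an API key header.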
AI Task Scheduler — Automatically run AI prompts on a defined time interval without any manual intervention.
Custom Execution Intervals — Set the exact number of minutes between each task execution to match your workflow cadence.
Auto-Run at Windows Startup — Configure the application to launch automatically when Windows boots, ensuring your tasks begin without manual action.
System Tray Minimization — Minimize the application to the Windows system tray so it operates silently in the background without cluttering your desktop.
Automatic Output Saving — All generated AI responses are automatically saved to your specified local destination files upon completion.
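Conceptually, an interval scheduler with automatic output saving boils down to a timed loop. A minimal Python sketch of that idea, assuming a placeholder `task` callable that produces the AI response (this is an illustration of the pattern, not the application's actual code):

```python
import time
from typing import Callable

def run_on_interval(task: Callable[[], str], interval_minutes: float,
                    output_path: str, iterations: int) -> int:
    """Run `task` every `interval_minutes`, appending each result to `output_path`.

    Returns the number of completed runs. A real scheduler would loop
    indefinitely; `iterations` is capped here so the sketch terminates.
    """
    runs = 0
    for _ in range(iterations):
        result = task()
        with open(output_path, "a", encoding="utf-8") as f:
            f.write(result + "\n")  # auto-save each response on completion
        runs += 1
        if runs < iterations:
            time.sleep(interval_minutes * 60)  # wait out the configured interval
    return runs
```

Launching such a loop at Windows startup and parking it in the system tray is what turns this simple pattern into hands-off background automation.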
Batch Prompt Scheduler — Process large volumes of prompts sequentially using an organized task queue, eliminating the need to manually trigger each request.
Sequential Batch Execution — Load hundreds of text or image inputs into the queue, and the software processes them one by one, routing each to your chosen LLM.
tasks.json Manual Editing — Directly edit the tasks.json configuration file to manage workflows at scale without relying on the graphical interface.
models.json Manual Editing — Customize model configurations via the models.json file for fine-grained control over how each model is used in your workflow.
Image Input Support — Include image-based inputs in your batch queue for multimodal AI processing where supported by the connected model.
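The exact schema of tasks.json is defined by the application itself. Purely as an illustration of what a hand-editable batch queue of this kind typically looks like, a sketch with entirely assumed field names (paths, keys, and model names below are placeholders, not the documented format):

```json
[
  {
    "model": "llama3",
    "prompt_file": "C:\\prompts\\summarize.txt",
    "output_file": "C:\\output\\summary.txt"
  },
  {
    "model": "gpt-4o",
    "prompt": "Describe this image.",
    "image": "C:\\input\\chart.png",
    "output_file": "C:\\output\\chart_description.txt"
  }
]
```

Editing a file like this in any text editor is how hundreds of queued tasks can be created or rearranged at once without clicking through the GUI.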
Flexible Prompt Input — Define your AI queries from multiple external sources for maximum workflow flexibility.
File-Based Prompt Loading — Load your prompt directly from a text file stored on your local system, making it simple to reuse and manage prompts outside the application.
URL-Based Prompt Fetching — Pull prompt content from a specified web address, enabling dynamically updated or remotely managed AI query workflows.
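Supporting both sources amounts to one small branching decision: treat the input as a URL if it looks like one, otherwise as a local file path. A minimal sketch of that pattern (the function name is illustrative, not the application's API):

```python
import urllib.request

def load_prompt(source: str) -> str:
    """Load prompt text from a local file path or an http(s) URL."""
    if source.startswith(("http://", "https://")):
        with urllib.request.urlopen(source) as resp:  # URL-based prompt fetch
            return resp.read().decode("utf-8")
    with open(source, "r", encoding="utf-8") as f:    # file-based prompt load
        return f.read()
```

The URL branch is what enables remotely managed prompts: update the text at the web address and every scheduled run picks up the new version automatically.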
Simplified AI Automation Architecture — Operates entirely as a standalone desktop application with no external server dependencies or agentic background overhead.
No Server Required — Unlike platforms such as n8n, the software runs directly on your existing Windows PC without any dedicated server or cloud instance to maintain.
Transparent Token Usage — Executes tasks exactly as scheduled with no hidden agentic processes consuming extra tokens in the background.
Lightweight Footprint — The installer is just 5.70 MB, making it fast to download, install, and run even on older or resource-limited machines.