Reasoning Flows
Reasoning Flows Overview
Reasoning Flows is an integrated development platform that helps teams build, deploy, and manage data-driven solutions efficiently. It supports a wide range of use cases, from machine learning models and ETL pipelines to custom APIs, web applications, and automated data workflows.
Built with a flexible architecture, Reasoning Flows combines GUI-based tools with open coding environments, providing both low-code and pro-code capabilities. It leverages Docker containerization for scalable compute, supports scheduling and orchestration of data tasks, and facilitates collaboration through reusable assets and project cloning features.
Capabilities and Use Cases
Reasoning Flows enables the development of solutions spanning:
- Machine Learning model training and prediction pipelines
- Data extraction, transformation, and loading (ETL) from various sources
- Custom application and API development
- Text intelligence using embeddings, classification, and segmentation
- Automated notification systems (e.g., email alerts)
- Real-time or scheduled execution of complex workflows
Projects in Reasoning Flows
Every Reasoning Flow begins within a Project — a container that organizes related objects, datasets, and executions.
Projects help teams structure their workflows logically (for example, separating DataPipelines, APIs, or AI models) and make large environments easier to manage.
Project Type
Defines how a flow runs or is triggered:
- Batch Process – Runs manually or on a schedule. Ideal for ETL pipelines, model training, or reporting.
- Interactive Process – Runs on demand via a user interface that requests parameters.
- API Calls – Exposed as an endpoint and executed through REST API calls (a client-side example follows the tip below).
Tip: Most operational data and ML pipelines are configured as Batch Processes.
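For illustration, the sketch below shows what a client call to a flow published as an API Calls project might look like. The endpoint URL, bearer-token authentication, and parameter names are placeholders, not the platform's documented API; use the values shown for your project in the Reasoning Flows UI.

```python
import requests

# Hypothetical example of triggering a flow exposed as an "API Calls" project.
# The URL, token handling, and parameter names below are placeholders.
ENDPOINT = "https://example.com/reasoning-flows/api/v1/flows/churn-prediction/run"
API_TOKEN = "your-api-token"

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"customer_id": 12345},  # runtime parameters passed to the flow
    timeout=30,
)
response.raise_for_status()
print(response.json())
```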
Project Category
Used for labeling and organizing projects within the platform.
Categories don’t affect workflow execution — they’re purely organizational for navigation and filtering.
Common examples include:
- Data Pipeline – For ingestion and extraction workflows.
- Data Preparation & Transform – For data cleaning, formatting, or preprocessing.
- AI/ML Workflow – For training, deploying, or predicting with ML models.
- Processes Workflow – For orchestrating multiple logic layers or dependencies.
- Big Data Process – For distributed or parallel data tasks.
- Notifications – For automated alert or messaging workflows.
- Sequential Code Process – For chained execution or custom scripts.
- API – For services exposed to external systems.
Note: Project categories serve as logical groupings. They don’t impact runtime behavior or logic flow but simplify workspace management.
Project Type and Category Alignment
| Project Type | Typical Categories | Description / Common Use Cases |
|---|---|---|
| Batch Process (runs manually or on a schedule) | Data Pipeline, Data Preparation & Transform, AI/ML Workflow, Big Data Process, Sequential Code Process, Notifications | Most common type. Used for scheduled or on-demand data workflows, ETL jobs, model training, or automated alerts. |
| Interactive Process (runs manually from a UI with user inputs) | AI/ML Workflow, Data Preparation & Transform, Processes Workflow, Data Apps | Best for flows requiring user interaction, e.g. parameterized analytics runs, experiment triggers, or UI-based tools. |
| API Calls (runs via REST API endpoints) | API, AI/ML Workflow, Notifications, Processes Workflow | Used to expose flows as services or endpoints, e.g. prediction APIs, webhook-triggered tasks, or on-demand automations. |
Examples
| Scenario | Project Type | Category |
|---|---|---|
| Nightly ETL to refresh a dashboard | Batch Process | Data Pipeline |
| Cleaning and preparing a dataset before training | Batch Process | Data Preparation & Transform |
| Training an AutoML model with user-defined parameters | Interactive Process | AI/ML Workflow |
| Internal tool that runs custom visualizations based on user inputs | Interactive Process | Data Apps |
| Real-time prediction endpoint for an external system | API Calls | API |
| Slack or email alert when a model completes training | Batch Process or API Calls | Notifications |
Tip: Any combination of project type and category is technically valid — these alignments simply reflect common best practices for keeping environments structured and intuitive.
Object Categories in Reasoning Flows
Extract and Load
This category automates data extraction from registered data sources (e.g., MySQL-compatible databases) and loads the results into tables managed within the Reasoning Flows platform. These objects can use direct table-to-table transfers or custom SQL queries to define the scope of data retrieval.
Key Objects:
- AP DataPipe Engine - MySQL
- AP DataPipe Engine - File
- Python 3.12 DataPipe Engine
Note: The AP DataPipe Engine provides a GUI-based configuration form for mapping source and destination tables, enabling rapid setup for common ETL tasks.
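As a minimal sketch of what a custom extract-and-load step might look like, e.g. in the Python 3.12 DataPipe Engine, the example below reads from a MySQL source with a custom query and appends the result to a destination table. The connection strings, table names, and the availability of pandas/SQLAlchemy in the runtime are assumptions.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection strings for the source database and the
# project's managed data repository.
source = create_engine("mysql+pymysql://user:password@source-host/sales_db")
destination = create_engine("mysql+pymysql://user:password@flows-host/project_db")

# A custom SQL query defines the scope of the extraction.
query = """
    SELECT order_id, customer_id, total, created_at
    FROM orders
    WHERE created_at >= CURDATE() - INTERVAL 1 DAY
"""

df = pd.read_sql(query, source)

# Load the extracted rows into a table managed within the platform.
df.to_sql("orders_daily", destination, if_exists="append", index=False)
```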
Transform and Prepare
These objects focus on refining data before it is used for analytics or modeling. They support data cleaning, format standardization, index generation, data type conversion, date processing, and custom SQL transformations. GUI-based tools allow for quick configuration, while SQL objects provide full scripting control.
Key Objects:
- AP Prepared Table
- AP Transform String to Binary
- AP Transform String to Numeric
- AP Transform Dates to Numeric
- AP SQL Code Execution
- AP Model Render
- SingularAI Text Splitter
Prepared Table: Converts raw database tables into datasets that support field-level transformation and analysis.
SQL Code Execution: Executes custom SQL logic as a standalone process within the pipeline.
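To make the transformations concrete, here is an illustrative Python equivalent of the string-to-numeric and date-to-numeric conversions these objects expose. The use of pandas and the column names are assumptions for illustration, not the objects' internal implementation.

```python
import pandas as pd

# Hypothetical raw dataset with string-typed columns.
df = pd.DataFrame({
    "amount": ["1,200.50", "980.00", "n/a"],
    "signup_date": ["2024-01-15", "2024-03-02", "2024-06-30"],
})

# String to numeric: strip formatting and coerce invalid values to NaN.
df["amount"] = pd.to_numeric(df["amount"].str.replace(",", ""), errors="coerce")

# Dates to numeric: parse to datetime, then derive days since the date.
df["signup_date"] = pd.to_datetime(df["signup_date"])
df["days_since_signup"] = (pd.Timestamp.today().normalize() - df["signup_date"]).dt.days
```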
AI and Machine Learning
This category offers both AutoML tools and custom development environments for machine learning. AutoML tools include GUI-driven workflows for training, deploying, and predicting with models. Dedicated GPU environments are available for high-performance training workloads.
Key Objects:
- AP AutoML Engine
- Singular-AI Text Embeddings
- AP Generative AI Workflow
- AP AutoML GPU Engine
AutoML objects: Focused on low-code model development with visual workflows.
GPU engines: Designed for larger datasets and more complex model training, leveraging containerized GPU acceleration.
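For orientation, the sketch below shows the shape of a train-and-predict step of the kind these environments run. The use of scikit-learn, the dataset path, and the feature names are assumptions; the AP AutoML Engine itself is configured through its visual workflow rather than code like this.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical prepared dataset with a "label" target column.
data = pd.read_csv("training_data.csv")
X = data.drop(columns=["label"])
y = data["label"]

# Hold out a test split, train a baseline model, and report accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```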
High Performance Computing
These are open development environments that allow teams to write and execute custom code using supported languages. Ideal for advanced data processing, ML model development, API services, and custom application logic.
Key Objects:
- PHP 7.4 Application
- PHP 8.2 Application
- Python 3.8 Advanced ML Application
- Python 3.8 Advanced ML & Plotly
- Python 3 FastAPI
- Python 3.9 Google Cloud Speech
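As an example of what can be built in the Python 3 FastAPI environment, here is a minimal service sketch; the route, request model, and scoring logic are illustrative placeholders rather than a platform-provided template.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoreRequest(BaseModel):
    customer_id: int
    monthly_spend: float

@app.post("/score")
def score(req: ScoreRequest) -> dict:
    # Placeholder scoring logic; a real object might load a trained model here.
    risk = min(req.monthly_spend / 1000.0, 1.0)
    return {"customer_id": req.customer_id, "risk_score": risk}
```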
Notebooks
The Reasoning Flows Notebooks object lets teams develop interactive Python notebooks directly within Reasoning Flows, supporting data exploration, experimentation, and documentation of logic and results in a single interface.
Notification Engine
Allows configuration and execution of custom notifications using email services. Requires a Mailgun API key for operation. These objects can be triggered within workflows to send status updates, alerts, or summaries.
Key Object:
- AP Notification Engine
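The engine itself is configured through its form, but the sketch below shows the kind of Mailgun call such a notification ultimately depends on; the domain, API key, and recipients are placeholders.

```python
import requests

MAILGUN_DOMAIN = "mg.example.com"          # placeholder sending domain
MAILGUN_API_KEY = "your-mailgun-api-key"   # placeholder API key

# Standard Mailgun messages endpoint with basic auth.
response = requests.post(
    f"https://api.mailgun.net/v3/{MAILGUN_DOMAIN}/messages",
    auth=("api", MAILGUN_API_KEY),
    data={
        "from": "Reasoning Flows <alerts@mg.example.com>",
        "to": ["team@example.com"],
        "subject": "Model training completed",
        "text": "The nightly training run finished successfully.",
    },
    timeout=30,
)
response.raise_for_status()
```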
Web-Hook Sender
This object type enables integration with external systems via webhook calls. Users can define webhook URLs and payloads to trigger downstream services based on events occurring within Reasoning Flows.
Key Object:
- AP Web-Hook Sender
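A minimal sketch of the kind of webhook call this object issues is shown below; the URL and payload fields are placeholders defined by whatever system receives the hook.

```python
import requests

# Hypothetical event payload describing a completed flow.
payload = {
    "event": "flow.completed",
    "project": "nightly-etl",
    "status": "success",
}

# Post the payload to the configured downstream webhook URL.
response = requests.post(
    "https://hooks.example.com/reasoning-flows",
    json=payload,
    timeout=10,
)
response.raise_for_status()
```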
Development Interface
Each object within Reasoning Flows features a robust development interface that includes:
- Global Files: Shared code or libraries accessible by multiple objects in the same project.
- Data Repository Access: Direct integration with the project’s internal data tables and schemas.
- Execution and Scheduling: Support for on-demand or scheduled runs of objects and workflows.
- Project and Object Cloning: Rapid duplication of entire projects or individual objects for reuse.
- Dynamic Parameter Configuration: Runtime parameter injection to support flexible and scalable executions (see the sketch below).
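As a minimal sketch, one way an object might read injected runtime parameters is shown below. The use of environment variables and the parameter names are assumptions; the actual injection mechanism is configured per object in the platform.

```python
import os

# Placeholder parameter names; values are supplied at execution time.
RUN_DATE = os.environ.get("RUN_DATE", "2024-01-01")
BATCH_SIZE = int(os.environ.get("BATCH_SIZE", "500"))
TARGET_TABLE = os.environ.get("TARGET_TABLE", "orders_daily")

print(f"Processing {TARGET_TABLE} for {RUN_DATE} in batches of {BATCH_SIZE}")
```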
Deployment and Compute Infrastructure
Reasoning Flows runs on a containerized infrastructure based on Docker. It offers two types of compute resource configurations:
- Shared Container Resources: Suitable for general workloads; cost-effective and managed across tenants.
- Dedicated Container Resources: Reserved compute environments offering guaranteed performance, designed for enterprise-scale needs.
Summary
Reasoning Flows is a unified environment for data development, supporting both GUI-based and code-driven workflows. It empowers teams to extract insights from data, build intelligent applications, and deploy solutions across the organization. Whether handling structured transformations, deploying AI models, or integrating with external APIs, Reasoning Flows provides the tools and infrastructure to deliver robust, scalable outcomes.