Reasoning Flows
Reasoning Flows Overview
Reasoning Flows is an integrated development platform that helps teams build, deploy, and manage data-driven solutions efficiently. It supports a wide range of use cases, from machine learning models and ETL (Extract, Transform, Load) pipelines to custom APIs, web applications, and automated data workflows.
Built with a flexible architecture, Reasoning Flows combines GUI-based tools with open coding environments, providing both low-code and pro-code capabilities. It leverages Docker containerization for scalable compute, supports scheduling and orchestration of data tasks, and facilitates collaboration through reusable assets and project cloning features.
Terminology Note: Throughout this documentation, objects prefixed with "AP" (Analytics Platform) are GUI-driven components native to Reasoning Flows. Objects prefixed with "SingularAI" are specialized text intelligence components integrated into the platform.
Capabilities and Use Cases
Reasoning Flows enables the development of solutions that span:
- Machine learning model training and prediction pipelines
- Data extraction, transformation, and loading (ETL) from various sources
- Custom application and API development
- Text intelligence using embeddings, classification, and segmentation
- Automated notification systems (e.g., email alerts)
- Real-time or scheduled execution of complex workflows
Key Concepts
Before exploring the platform's capabilities, it's helpful to understand two foundational concepts:
Projects are containers that organize related objects, datasets, and executions. Projects help teams structure their workflows logically (for example, separating data pipelines, APIs, or AI models) and make large environments easier to manage.
Objects are the individual building blocks within a project. Each object represents a discrete functional unit—such as a data extraction task, a transformation step, a machine learning model, or a notification trigger. Objects can be configured through GUI forms or custom code, and multiple objects can be chained together to form complete workflows.
Projects in Reasoning Flows
Every flow in Reasoning Flows begins within a project. Each project is defined by two attributes: its type, which determines how it runs, and its category, which determines how it is organized.
Project Type
Defines how a flow runs or is triggered:
- Batch Process – Runs manually or on a schedule. Ideal for ETL pipelines, model training, or reporting.
- Interactive Process – Runs on demand via a user interface that requests parameters.
- API Calls – Exposed as an endpoint and executed through REST (Representational State Transfer) API calls; a client sketch follows below.
Tip: Most operational data and ML pipelines are configured as Batch Processes.
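For the API Calls type, a minimal client sketch in Python might look like the following. The endpoint URL, authentication header, and parameter names are illustrative assumptions, not values defined by the platform:

```python
import requests

# Hypothetical endpoint and token; the real URL pattern and auth scheme
# come from your Reasoning Flows instance configuration.
ENDPOINT = "https://your-instance.example.com/api/flows/sales-forecast/run"
API_TOKEN = "YOUR_API_TOKEN"

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"region": "EMEA", "horizon_days": 30},  # runtime parameters
    timeout=60,
)
response.raise_for_status()  # fail loudly on HTTP errors
print(response.json())
```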
Project Category
Used for labeling and organizing projects within the platform. Categories don't affect workflow execution—they're purely organizational for navigation and filtering.
Available categories include:
- Data Pipeline – For ingestion and extraction workflows.
- Data Preparation & Transform – For data cleaning, formatting, or preprocessing.
- AI/ML Workflow – For training, deploying, or predicting with ML models.
- Processes Workflow – For orchestrating multiple logic layers or dependencies across objects.
- Big Data Process – For distributed or parallel data tasks.
- Notifications – For automated alert or messaging workflows.
- Sequential Code Process – For chained execution or custom scripts.
- API – For services exposed to external systems.
- Data Apps – For interactive applications with user-facing interfaces.
Note: Project categories serve as logical groupings. They don't impact runtime behavior or logic flow but simplify workspace management.
Project Type and Category Alignment
| Project Type | Typical Categories | Description / Common Use Cases |
|---|---|---|
| Batch Process (Runs manually or by scheduler) | Data Pipeline, Data Preparation & Transform, AI/ML Workflow, Big Data Process, Sequential Code Process, Notifications | Most common type. Used for scheduled or on-demand data workflows, ETL jobs, model training, or automated alerts. |
| Interactive Process (Runs manually from a UI with user inputs) | AI/ML Workflow, Data Preparation & Transform, Processes Workflow, Data Apps | Best for flows requiring user interaction—e.g., parameterized analytics runs, experiment triggers, or UI-based tools. |
| API Calls (Runs via REST API endpoints) | API, AI/ML Workflow, Notifications, Processes Workflow | Used to expose flows as services or endpoints—e.g., prediction APIs, webhook-triggered tasks, or on-demand automations. |
Examples
| Scenario | Project Type | Category |
|---|---|---|
| Nightly ETL to refresh a dashboard | Batch Process | Data Pipeline |
| Cleaning and preparing a dataset before training | Batch Process | Data Preparation & Transform |
| Training an AutoML (Automated Machine Learning) model with user-defined parameters | Interactive Process | AI/ML Workflow |
| Internal tool that runs custom visualizations based on user inputs | Interactive Process | Data Apps |
| Real-time prediction endpoint for an external system | API Calls | API |
| Slack or email alert when a model completes training | Batch Process or API Calls | Notifications |
Tip: Any combination of project type and category is technically valid—these alignments simply reflect common best practices for keeping environments structured and intuitive.
Object Categories in Reasoning Flows
Objects are grouped into functional categories based on their purpose within a workflow. Each category contains specialized objects designed for specific tasks, from data extraction to machine learning to external integrations.
Extract and Load
This category enables automated data extraction from registered data sources and supports loading into tables managed within the Reasoning Flows platform. These objects may use direct table-to-table transfers or custom SQL queries to define the scope of data retrieval.
Supported Data Sources:
- MySQL-compatible databases (primary support)
- File-based sources (CSV, JSON, etc.)
Key Objects:
| Object | Description |
|---|---|
| AP DataPipe Engine - MySQL | GUI-based extraction from MySQL-compatible databases with visual field mapping |
| AP DataPipe Engine - File | GUI-based extraction from file sources |
| Python 3.12 DataPipe Engine | Code-driven extraction for custom data source handling (see the sketch below) |
Note: The AP DataPipe Engine provides a GUI-based configuration form for mapping source and destination tables, enabling rapid setup for common ETL tasks.
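When a source needs handling the GUI engines don't cover, the Python 3.12 DataPipe Engine lets you script the extraction yourself. A minimal sketch, assuming the pymysql driver and a CSV staging step (the platform's actual load mechanism may differ):

```python
import csv
import pymysql

# Placeholder connection details; in practice these would come from a
# registered data source.
conn = pymysql.connect(
    host="db.example.com", user="etl_user", password="...", database="sales"
)
try:
    with conn.cursor() as cur:
        # A custom SQL query defines the extraction scope.
        cur.execute(
            "SELECT order_id, amount, created_at FROM orders "
            "WHERE created_at >= CURDATE()"
        )
        rows = cur.fetchall()
        headers = [col[0] for col in cur.description]
finally:
    conn.close()

# Stage the extract as a file for loading into a platform-managed table.
with open("orders_extract.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(headers)
    writer.writerows(rows)
```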
Transform and Prepare
These objects focus on refining data before it is used for analytics or modeling. They support data cleaning, format standardization, index generation, data type conversion, date processing, and custom SQL transformations. GUI-based tools allow for quick configuration, while SQL objects provide full scripting control.
Key Objects:
| Object | Description |
|---|---|
| AP Prepared Table | Converts raw database tables into datasets that support field-level transformation and analysis |
| AP Transform String to Binary | Converts text fields to binary representations |
| AP Transform String to Numeric | Converts text fields to numeric values |
| AP Transform Dates to Numeric | Converts date fields to numeric timestamps or components |
| AP SQL Code Execution | Executes custom SQL logic as a standalone process within the pipeline |
| AP Model Render | Generates formatted outputs from model results |
| SingularAI Text Splitter | Segments text content into smaller chunks for processing (illustrated below) |
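To illustrate the kind of segmentation the SingularAI Text Splitter performs, here is a plain-Python sketch of character-based chunking with overlap. The chunk size and overlap values are illustrative; the object's actual options are set through its configuration form:

```python
def split_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping chunks of roughly chunk_size characters."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

sample = "Reasoning Flows segments long documents before processing. " * 40
print(len(split_text(sample)))  # number of chunks produced
```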
AI and Machine Learning
This category offers both AutoML (Automated Machine Learning) tools and custom development environments for machine learning. AutoML tools include GUI-driven workflows for training, deploying, and predicting with models. Dedicated GPU (Graphics Processing Unit) environments are available for high-performance training workloads.
Key Objects:
| Object | Description |
|---|---|
| AP AutoML Engine | Low-code model development with visual workflows for standard datasets |
| AP AutoML GPU Engine | GPU-accelerated training for larger datasets and complex models |
| SingularAI Text Embeddings | Generates vector representations of text for semantic analysis |
| AP Generative AI Workflow | Integrates generative AI capabilities into pipelines |
When to use GPU: Choose the AP AutoML GPU Engine when working with datasets exceeding 100,000 rows, when training deep learning models, or when training time on the standard engine exceeds acceptable thresholds.
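For orientation, the model selection that the AP AutoML Engine automates behind its visual workflow resembles a hand-written scikit-learn search like the one below. This is a conceptual sketch, not the engine's actual internals:

```python
from sklearn.datasets import load_iris  # stand-in for a platform dataset
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# AutoML automates searches like this across many model families.
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [50, 100], "max_depth": [None, 10]},
    cv=5,
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```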
High Performance Computing
These are open development environments that allow teams to write and execute custom code using supported languages. Ideal for advanced data processing, ML model development, API services, and custom application logic.
Key Objects:
| Object | Recommended Use Case |
|---|---|
| PHP 7.4 Application | Legacy PHP applications requiring older dependencies |
| PHP 8.2 Application | Modern PHP development with latest language features (recommended for new projects) |
| Python 3.8 Advanced ML Application | Machine learning workflows requiring scikit-learn, TensorFlow, or PyTorch |
| Python 3.8 Advanced ML & Plotly | ML workflows with interactive visualization requirements |
| Python 3 FastAPI | Building high-performance REST API endpoints (see the example below) |
| Python 3.9 Google Cloud Speech | Audio transcription and speech-to-text applications |
Version Selection Guidance: For new projects, use PHP 8.2 for PHP applications and Python 3.8+ for ML workloads. Older versions are maintained for backward compatibility with existing projects.
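As an example of the Python 3 FastAPI object's intended use, here is a minimal sketch of a prediction endpoint. The route, request schema, and scoring logic are placeholders:

```python
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictionRequest(BaseModel):
    features: List[float]  # hypothetical input shape

@app.post("/predict")
def predict(req: PredictionRequest):
    # Placeholder scoring; a real object would load and apply a trained model.
    score = sum(req.features) / max(len(req.features), 1)
    return {"score": score}
```

Served with a standard ASGI server (for example, `uvicorn main:app`), this exposes a POST /predict endpoint that validates its JSON input automatically.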
Notebooks
The Reasoning Flows Notebooks object provides an interactive Python development environment directly within the platform. Notebooks are ideal for:
- Data Exploration – Inspect datasets, generate statistics, and visualize distributions before building pipelines.
- Prototyping – Test transformation logic or model approaches before formalizing them into production objects.
- Documentation – Combine code, outputs, and markdown explanations in a single shareable document.
- Collaboration – Share exploratory analysis with team members who can view and build upon your work.
Notebooks support standard Python libraries and have access to the project's data repository, allowing seamless transition from exploration to production workflows.
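A typical exploration cell might look like the following sketch. The file and column names are hypothetical; in practice the data would come from the project's data repository:

```python
import pandas as pd

df = pd.read_csv("orders_extract.csv")  # hypothetical staged extract

df.info()                     # column types and null counts
print(df.describe())          # summary statistics for numeric columns
print(df["amount"].quantile([0.25, 0.5, 0.75]))  # distribution checkpoints
```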
Notification Engine
Allows configuration and execution of custom notifications using email services. These objects can be triggered within workflows to send status updates, alerts, or summaries.
Key Object:
- AP Notification Engine
Requirements:
- Mailgun API key (currently the only supported email service provider; a call sketch appears below)
Common Use Cases:
- Pipeline completion or failure alerts
- Scheduled report delivery
- Threshold-based warnings (e.g., data quality issues)
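The AP Notification Engine is configured through its GUI form, but for context, a direct Mailgun send from Python looks like the sketch below. The domain, key, and addresses are placeholders:

```python
import requests

MAILGUN_DOMAIN = "mg.example.com"  # placeholder sending domain
MAILGUN_API_KEY = "YOUR_API_KEY"   # placeholder key

response = requests.post(
    f"https://api.mailgun.net/v3/{MAILGUN_DOMAIN}/messages",
    auth=("api", MAILGUN_API_KEY),
    data={
        "from": "Reasoning Flows <alerts@mg.example.com>",
        "to": "team@example.com",
        "subject": "Pipeline completed",
        "text": "Nightly ETL finished successfully.",
    },
    timeout=30,
)
response.raise_for_status()
```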
Webhook Sender
This object type enables integration with external systems via webhook calls. Users define webhook URLs and payloads to trigger downstream services based on events occurring within Reasoning Flows, as sketched below.
Key Object:
- AP Webhook Sender
Common Use Cases:
- Triggering external automation tools (Zapier, Make, etc.)
- Notifying third-party systems of pipeline completion
- Initiating downstream processes in other platforms
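Conceptually, a webhook call is an HTTP POST with a JSON payload. A minimal sketch, with an illustrative URL and event fields:

```python
import requests

# Placeholder URL; in practice this would be an endpoint registered in the
# receiving system (e.g., a Zapier catch hook).
WEBHOOK_URL = "https://hooks.example.com/trigger/abc123"

payload = {
    "event": "pipeline.completed",  # illustrative event name
    "project": "nightly-etl",
    "status": "success",
}
requests.post(WEBHOOK_URL, json=payload, timeout=30).raise_for_status()
```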
Development Interface
Each object within Reasoning Flows features a robust development interface that includes:
- Global Files – Shared code or libraries accessible by multiple objects in the same project.
- Data Repository Access – Direct integration with the project's internal data tables and schemas.
- Execution and Scheduling – Support for on-demand or scheduled runs of objects and workflows.
- Project and Object Cloning – Rapid duplication of entire projects or individual objects for reuse.
- Dynamic Parameter Configuration – Runtime parameter injection to support flexible and scalable executions.
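To show what dynamic parameter configuration enables, here is a hedged sketch of an object reading runtime parameters. The environment-variable names are assumptions for illustration; the platform's actual injection mechanism may differ:

```python
import json
import os

# Hypothetical variable names; parameters could equally arrive as CLI
# arguments or a config file depending on how the object is invoked.
region = os.environ.get("FLOW_PARAM_REGION", "EMEA")
horizon = int(os.environ.get("FLOW_PARAM_HORIZON_DAYS", "30"))

print(json.dumps({"region": region, "horizon_days": horizon}))
```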
Deployment and Compute Infrastructure
Reasoning Flows runs on a containerized infrastructure based on Docker. It offers two compute resource configurations:
| Configuration | Description | Best For |
|---|---|---|
| Shared Container Resources | Cost-effective compute managed across tenants | Development, testing, light production workloads |
| Dedicated Container Resources | Reserved compute environments with guaranteed performance | Enterprise-scale production, time-sensitive pipelines |
Summary
Reasoning Flows is a unified environment for data development, supporting both GUI-based and code-driven workflows. It empowers teams to extract insights from data, build intelligent applications, and deploy solutions across the organization. Whether handling structured transformations, deploying AI models, or integrating with external APIs, Reasoning Flows provides the tools and infrastructure to deliver robust, scalable outcomes.