Reasoning Flows

Reasoning Flows Overview

Reasoning Flows is an integrated development platform that lets teams build, deploy, and manage data-driven solutions efficiently. It supports a wide range of use cases, from machine learning models and ETL (Extract, Transform, Load) pipelines to custom APIs, web applications, and automated data workflows.

Built with a flexible architecture, Reasoning Flows combines GUI-based tools with open coding environments, providing both low-code and pro-code capabilities. It leverages Docker containerization for scalable compute, supports scheduling and orchestration of data tasks, and facilitates collaboration through reusable assets and project cloning features.

Terminology Note: Throughout this documentation, objects prefixed with "AP" (Analytics Platform) are GUI-driven components native to Reasoning Flows. Objects prefixed with "SingularAI" are specialized text intelligence components integrated into the platform.



Capabilities and Use Cases

Reasoning Flows enables the development of solutions that span:

  • Machine learning model training and prediction pipelines
  • Data extraction, transformation, and loading (ETL) from various sources
  • Custom application and API development
  • Text intelligence using embeddings, classification, and segmentation
  • Automated notification systems (e.g., email alerts)
  • Real-time or scheduled execution of complex workflows

Key Concepts

Before exploring the platform's capabilities, it's helpful to understand two foundational concepts:

Projects are containers that organize related objects, datasets, and executions. Projects help teams structure their workflows logically (for example, separating data pipelines, APIs, or AI models) and make large environments easier to manage.

Objects are the individual building blocks within a project. Each object represents a discrete functional unit—such as a data extraction task, a transformation step, a machine learning model, or a notification trigger. Objects can be configured through GUI forms or custom code, and multiple objects can be chained together to form complete workflows.


Projects in Reasoning Flows

Every Reasoning Flow begins within a project.

Project Type

Defines how a flow runs or is triggered:

  • Batch Process – Runs manually or on a schedule. Ideal for ETL pipelines, model training, or reporting.
  • Interactive Process – Runs on demand via a user interface that requests parameters.
  • API Calls – Exposed as an endpoint and executed through REST (Representational State Transfer) API calls (see the client sketch below).

Tip: Most operational data and ML pipelines are configured as Batch Processes.
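
For illustration, a flow deployed under the API Calls type is invoked like any REST endpoint. The sketch below uses Python's requests library; the endpoint URL, token, and parameter names are hypothetical placeholders, not documented platform values:

    import requests

    # Hypothetical endpoint and token -- substitute the values shown in your
    # project's deployment settings.
    ENDPOINT = "https://flows.example.com/api/v1/projects/churn-predictor/run"
    TOKEN = "YOUR_API_TOKEN"

    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"customer_id": 4221},  # runtime parameters for the flow
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())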

Project Category

Used for labeling and organizing projects within the platform. Categories don't affect workflow execution—they're purely organizational for navigation and filtering.

Available categories include:

  • Data Pipeline – For ingestion and extraction workflows.
  • Data Preparation & Transform – For data cleaning, formatting, or preprocessing.
  • AI/ML Workflow – For training, deploying, or predicting with ML models.
  • Processes Workflow – For orchestrating multiple logic layers or dependencies across objects.
  • Big Data Process – For distributed or parallel data tasks.
  • Notifications – For automated alert or messaging workflows.
  • Sequential Code Process – For chained execution or custom scripts.
  • API – For services exposed to external systems.
  • Data Apps – For interactive applications with user-facing interfaces.

Note: Project categories serve as logical groupings. They don't impact runtime behavior or logic flow but simplify workspace management.


Project Type and Category Alignment

| Project Type | Typical Categories | Description / Common Use Cases |
|---|---|---|
| Batch Process (runs manually or by scheduler) | Data Pipeline, Data Preparation & Transform, AI/ML Workflow, Big Data Process, Sequential Code Process, Notifications | Most common type. Used for scheduled or on-demand data workflows, ETL jobs, model training, or automated alerts. |
| Interactive Process (runs manually from a UI with user inputs) | AI/ML Workflow, Data Preparation & Transform, Processes Workflow, Data Apps | Best for flows requiring user interaction, e.g., parameterized analytics runs, experiment triggers, or UI-based tools. |
| API Calls (runs via REST API endpoints) | API, AI/ML Workflow, Notifications, Processes Workflow | Used to expose flows as services or endpoints, e.g., prediction APIs, webhook-triggered tasks, or on-demand automations. |

Examples

| Scenario | Project Type | Category |
|---|---|---|
| Nightly ETL to refresh a dashboard | Batch Process | Data Pipeline |
| Cleaning and preparing a dataset before training | Batch Process | Data Preparation & Transform |
| Training an AutoML (Automated Machine Learning) model with user-defined parameters | Interactive Process | AI/ML Workflow |
| Internal tool that runs custom visualizations based on user inputs | Interactive Process | Data Apps |
| Real-time prediction endpoint for an external system | API Calls | API |
| Slack or email alert when a model completes training | Batch Process or API Calls | Notifications |

Tip: Any combination of project type and category is technically valid—these alignments simply reflect common best practices for keeping environments structured and intuitive.


Object Categories in Reasoning Flows

Objects are grouped into functional categories based on their purpose within a workflow. Each category contains specialized objects designed for specific tasks, from data extraction to machine learning to external integrations.

Extract and Load

This category enables automated data extraction from registered data sources and loads the results into tables managed within the Reasoning Flows platform. Extraction objects can perform direct table-to-table transfers or use custom SQL queries to define the scope of data retrieval.

Supported Data Sources:

  • MySQL-compatible databases (primary support)
  • File-based sources (CSV, JSON, etc.)

Key Objects:

| Object | Description |
|---|---|
| AP DataPipe Engine - MySQL | GUI-based extraction from MySQL-compatible databases with visual field mapping |
| AP DataPipe Engine - File | GUI-based extraction from file sources |
| Python 3.12 DataPipe Engine | Code-driven extraction for custom data source handling |

Note: The AP DataPipe Engine provides a GUI-based configuration form for mapping source and destination tables, enabling rapid setup for common ETL tasks.
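
For the code-driven path, the following is a minimal sketch of what an extraction might look like inside the Python 3.12 DataPipe Engine, assuming the object can reach the source database with a standard driver (pymysql here); the hosts, credentials, and table names are illustrative, not platform defaults:

    import pymysql

    # Illustrative connection settings -- in practice these would come from the
    # data source registered in Reasoning Flows and the project's managed tables.
    source = pymysql.connect(host="source-db.example.com", user="etl",
                             password="...", database="sales")
    dest = pymysql.connect(host="flows-db.example.com", user="etl",
                           password="...", database="workspace")

    try:
        with source.cursor() as read_cur, dest.cursor() as write_cur:
            # A custom SQL query defines the scope of the extraction.
            read_cur.execute(
                "SELECT order_id, customer_id, total FROM orders "
                "WHERE created_at >= CURDATE() - INTERVAL 1 DAY"
            )
            rows = read_cur.fetchall()

            # Load the extracted rows into the destination table.
            write_cur.executemany(
                "INSERT INTO daily_orders (order_id, customer_id, total) "
                "VALUES (%s, %s, %s)",
                rows,
            )
        dest.commit()
    finally:
        source.close()
        dest.close()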


Transform and Prepare

These objects focus on refining data before it is used for analytics or modeling. They support data cleaning, format standardization, index generation, data type conversion, date processing, and custom SQL transformations. GUI-based tools allow for quick configuration, while SQL objects provide full scripting control.

Key Objects:

| Object | Description |
|---|---|
| AP Prepared Table | Converts raw database tables into datasets that support field-level transformation and analysis |
| AP Transform String to Binary | Converts text fields to binary representations |
| AP Transform String to Numeric | Converts text fields to numeric values |
| AP Transform Dates to Numeric | Converts date fields to numeric timestamps or components |
| AP SQL Code Execution | Executes custom SQL logic as a standalone process within the pipeline |
| AP Model Render | Generates formatted outputs from model results |
| SingularAI Text Splitter | Segments text content into smaller chunks for processing |
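
The SingularAI Text Splitter is configured through the GUI, but the underlying idea is plain chunking. A minimal sketch of the concept; the chunk size and overlap parameters are illustrative assumptions, not documented options:

    def split_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
        """Segment text into fixed-size chunks with a small overlap, so that
        sentences straddling a boundary appear in both neighboring chunks."""
        chunks = []
        step = chunk_size - overlap
        for start in range(0, len(text), step):
            chunks.append(text[start:start + chunk_size])
        return chunks

    document = "Reasoning Flows supports text intelligence workflows. " * 40
    print(len(split_text(document)))  # number of chunks produced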

AI and Machine Learning

This category offers both AutoML (Automated Machine Learning) tools and custom development environments for machine learning. AutoML tools include GUI-driven workflows for training, deploying, and predicting with models. Dedicated GPU (Graphics Processing Unit) environments are available for high-performance training workloads.

Key Objects:

| Object | Description |
|---|---|
| AP AutoML Engine | Low-code model development with visual workflows for standard datasets |
| AP AutoML GPU Engine | GPU-accelerated training for larger datasets and complex models |
| SingularAI Text Embeddings | Generates vector representations of text for semantic analysis |
| AP Generative AI Workflow | Integrates generative AI capabilities into pipelines |

When to use GPU: Choose the AP AutoML GPU Engine when working with datasets exceeding 100,000 rows, when training deep learning models, or when training time on the standard engine exceeds acceptable thresholds.
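
SingularAI Text Embeddings is likewise GUI-configured. To make its output concrete, here is what generating vector representations looks like with an open-source library (sentence-transformers); this is an illustration of the concept, not the platform's internal implementation:

    from sentence_transformers import SentenceTransformer

    # Open-source stand-in for illustration; the platform object produces
    # comparable fixed-length vectors through its GUI configuration.
    model = SentenceTransformer("all-MiniLM-L6-v2")

    texts = [
        "Nightly ETL finished successfully.",
        "The model training job failed with an out-of-memory error.",
    ]
    vectors = model.encode(texts)  # one 384-dimensional vector per text
    print(vectors.shape)           # (2, 384)

Vectors like these feed downstream semantic tasks such as similarity search or classification.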


High Performance Computing

These are open development environments that allow teams to write and execute custom code using supported languages. Ideal for advanced data processing, ML model development, API services, and custom application logic.

Key Objects:

| Object | Recommended Use Case |
|---|---|
| PHP 7.4 Application | Legacy PHP applications requiring older dependencies |
| PHP 8.2 Application | Modern PHP development with latest language features (recommended for new projects) |
| Python 3.8 Advanced ML Application | Machine learning workflows requiring scikit-learn, TensorFlow, or PyTorch |
| Python 3.8 Advanced ML & Plotly | ML workflows with interactive visualization requirements |
| Python 3 FastAPI | Building REST API endpoints with high performance |
| Python 3.9 Google Cloud Speech | Audio transcription and speech-to-text applications |

Version Selection Guidance: For new projects, use PHP 8.2 for PHP applications and Python 3.8+ for ML workloads. Older versions are maintained for backward compatibility with existing projects.
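
As an example of the pro-code path, here is a minimal sketch of a prediction endpoint as it might be written inside the Python 3 FastAPI object; the route, request schema, and scoring logic are placeholders:

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class PredictionRequest(BaseModel):
        features: list[float]

    @app.post("/predict")
    def predict(req: PredictionRequest) -> dict:
        # Placeholder scoring logic -- a real object would load a trained
        # model from the project's data repository instead.
        score = sum(req.features) / max(len(req.features), 1)
        return {"score": score}

    # Run locally with: uvicorn main:app --reload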


Notebooks

The Reasoning Flows Notebooks object provides an interactive Python development environment directly within the platform. Notebooks are ideal for:

  • Data Exploration – Inspect datasets, generate statistics, and visualize distributions before building pipelines.
  • Prototyping – Test transformation logic or model approaches before formalizing them into production objects.
  • Documentation – Combine code, outputs, and markdown explanations in a single shareable document.
  • Collaboration – Share exploratory analysis with team members who can view and build upon your work.

Notebooks support standard Python libraries and have access to the project's data repository, allowing seamless transition from exploration to production workflows.
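
A typical exploration cell, assuming the dataset has already been loaded into a pandas DataFrame; the file and column names are illustrative:

    import pandas as pd

    # Illustrative load -- inside a notebook the project's data repository
    # would typically be queried instead of a local file.
    df = pd.read_csv("orders_sample.csv")

    print(df.shape)                     # rows x columns
    print(df.dtypes)                    # data type per field
    print(df.describe(include="all"))   # summary statistics
    print(df["status"].value_counts())  # distribution of a categorical field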


Notification Engine

Allows configuration and execution of custom notifications using email services. These objects can be triggered within workflows to send status updates, alerts, or summaries.

Key Object:

  • AP Notification Engine

Requirements:

  • Mailgun API key (currently the supported email service provider)

Common Use Cases:

  • Pipeline completion or failure alerts
  • Scheduled report delivery
  • Threshold-based warnings (e.g., data quality issues)
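
Since Mailgun is the supported provider, a notification ultimately reduces to an authenticated POST against Mailgun's messages endpoint. A minimal sketch; the domain and addresses are placeholders:

    import requests

    def send_alert(subject: str, body: str) -> None:
        # Mailgun's standard messages endpoint; domain and recipients below
        # are placeholders.
        resp = requests.post(
            "https://api.mailgun.net/v3/mg.example.com/messages",
            auth=("api", "YOUR_MAILGUN_API_KEY"),
            data={
                "from": "Reasoning Flows <alerts@mg.example.com>",
                "to": ["data-team@example.com"],
                "subject": subject,
                "text": body,
            },
        )
        resp.raise_for_status()

    send_alert("Pipeline finished", "Nightly ETL completed without errors.")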

Webhook Sender

This object type enables integration with external systems via webhook calls. Users can define webhook URLs and payloads to trigger downstream services based on events occurring within Reasoning Flows.

Key Object:

  • AP Webhook Sender

Common Use Cases:

  • Triggering external automation tools (Zapier, Make, etc.)
  • Notifying third-party systems of pipeline completion
  • Initiating downstream processes in other platforms
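
Under the hood, a webhook call is an HTTP POST with a user-defined URL and JSON payload. A minimal sketch; both the URL and the payload fields are placeholders:

    import requests

    # User-defined webhook URL -- a placeholder here.
    WEBHOOK_URL = "https://hooks.example.com/t/abc123"

    payload = {
        "event": "pipeline.completed",
        "project": "nightly-etl",
        "status": "success",
    }

    resp = requests.post(WEBHOOK_URL, json=payload, timeout=10)
    resp.raise_for_status()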

Development Interface

Each object within Reasoning Flows features a robust development interface that includes:

  • Global Files – Shared code or libraries accessible by multiple objects in the same project.
  • Data Repository Access – Direct integration with the project's internal data tables and schemas.
  • Execution and Scheduling – Support for on-demand or scheduled runs of objects and workflows.
  • Project and Object Cloning – Rapid duplication of entire projects or individual objects for reuse.
  • Dynamic Parameter Configuration – Runtime parameter injection to support flexible and scalable executions (see the sketch after this list).
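
How injected parameters reach an object at runtime is platform-specific; a common pattern in code-driven objects is to read them from environment variables or command-line arguments. A sketch under that assumption; the variable names are illustrative:

    import os

    # Illustrative parameter names -- the actual injection mechanism depends
    # on how the object is configured in Reasoning Flows.
    table_name = os.environ.get("FLOW_PARAM_TABLE", "daily_orders")
    lookback_days = int(os.environ.get("FLOW_PARAM_LOOKBACK_DAYS", "1"))

    print(f"Refreshing {table_name} for the last {lookback_days} day(s)")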

Deployment and Compute Infrastructure

Reasoning Flows runs on a containerized infrastructure based on Docker. It offers two types of compute resource configurations:

| Configuration | Description | Best For |
|---|---|---|
| Shared Container Resources | Cost-effective compute managed across tenants | Development, testing, light production workloads |
| Dedicated Container Resources | Reserved compute environments with guaranteed performance | Enterprise-scale production, time-sensitive pipelines |

Summary

Reasoning Flows is a unified environment for data development, supporting both GUI-based and code-driven workflows. It empowers teams to extract insights from data, build intelligent applications, and deploy solutions across the organization. Whether handling structured transformations, deploying AI models, or integrating with external APIs, Reasoning Flows provides the tools and infrastructure to deliver robust, scalable outcomes.