Subjective Technologies
A comprehensive guide to operating Subjective and VirtualGlass — from initial setup to advanced monitoring and troubleshooting.
Subjective Technologies is a desktop application built with PyQt5 that serves as a universal data source management platform. It enables you to connect, monitor, and orchestrate data from dozens of sources — cloud services, local folders, databases, APIs, and more — through a unified interface.
- **Connection Management**: Configure and manage connections to 30+ data source types with per-connection settings, credentials, and start-at-startup capabilities.
- **Pipeline Editor**: Drag-and-drop pipeline builder for creating multi-step data processing workflows with node connections and visual orchestration.
- **Plugin Store**: App Store-style marketplace for discovering, installing, and managing data source plugins fetched directly from GitHub.
- **VirtualGlass**: Companion C++ overlay application for cross-device connectivity with QR-code based device linking and KVM support.
- **Snapshot Browser**: Browse timestamped data snapshots with a JSON viewer, source code display, and database management tools.
- **Log Viewer**: Color-coded, filterable log viewer with tabular display for real-time monitoring and debugging of all data sources.
When you launch Subjective, the application performs the following startup sequence:
1. Reads `subjective.conf` from the project root, resolving all paths for tools, logs, and user data.
2. Starts an embedded Redis server (or connects to an external one) for inter-process messaging between the UI and data source services.
3. Shows a one-time privacy consent dialog; you must accept to continue.
4. Starts the local DataSourceManager service, which registers all installed plugins and makes them available for connections.
5. Launches the VirtualGlass overlay application automatically in the background.
6. Adds the Subjective icon to your system tray. Right-click it to access all features; the main Connections window is hidden by default.
7. After a 3-second delay, automatically starts any connections marked with "Start at Startup".
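The first step of this sequence can be sketched in a few lines. This is a minimal illustration, not the application's actual loader: the `subjective.conf` syntax is assumed here to be simple `KEY=value` lines (matching the key names in the Configuration table later in this guide), with `#` comments and blank lines ignored.

```python
from pathlib import Path

def load_conf(path="subjective.conf"):
    """Parse an assumed KEY=value config file into a dict.

    Hypothetical sketch: the real subjective.conf parser may differ.
    """
    conf = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf
```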
The Connections window is the central hub for configuring all your data source connections. It is divided into two main panes:
The left pane contains the Connection Edit Form where you configure individual data source connections. Fields include:
- **Server**: Select the target server (IP address) where data will be processed.
- **Data Source**: Choose from installed data source plugins (populated dynamically from the server).
- **Connection Name**: A user-friendly label for this connection.
- **Start at Startup**: Check to auto-start this connection when the application launches.
- **Plugin-specific fields**: Each data source plugin provides its own configuration fields (API keys, paths, tokens, intervals, etc.).
The right pane displays all configured connections. Each connection row shows:
- **Icon and name**: Each row identifies the connection at a glance with its iconography.
- **Play/Stop control**: Start or stop the data source process.
- **Progress bar**: Shows processing progress for running data sources.
- **Details**: Timestamp of the last run and the plugin class name.
Below the edit form, the Pipelines section lets you browse and select saved pipeline files (.pipe) to add them as executable pipeline connections in the list. Selecting a pipeline creates a SubjectivePipelineDataSource connection entry.
Subjective allows you to create your own custom data source plugins. Use the “New DataSource” toolbar button to scaffold a new plugin project with all required files.
1. Click "New DataSource" from the toolbar (grid icon with a plus sign).
2. Enter a name for your new data source in the popup dialog (e.g., "my_custom_api").
3. Click OK: the system generates a complete Python plugin template under the plugins directory, including the data source class, setup.py, icon template, and documentation skeleton.
4. Edit the generated code to implement your custom data collection logic.
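To give a feel for what the generated code asks of you, here is a rough sketch of a plugin class. The base class, hook names, and snapshot handling produced by the real template are not documented here, so everything below is an assumption made for illustration only.

```python
import json
import time

class MyCustomApiDataSource:
    """Hypothetical plugin skeleton; the real generated template differs.

    Collects data on an interval and emits timestamped JSON snapshots,
    matching the snapshot model described in this guide.
    """

    def __init__(self, config):
        self.config = config  # per-connection settings from the edit form
        self.interval = int(config.get("interval", 60))

    def fetch(self):
        """Replace this with your custom data collection logic."""
        return {"fetched_at": time.time(), "items": []}

    def run_once(self):
        # Snapshots are stored as JSON (see the Snapshot Browser section)
        return json.dumps(self.fetch())
```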
The Plugin Store provides an App Store-like experience for discovering and installing data source plugins. Plugins are fetched from the Subjective Technologies GitHub organization and displayed in a searchable grid.
- **Search**: Filter plugins by name, class, or description using the search bar.
- **Ratings**: Each plugin displays a community rating.
- **Install status**: Green "Installed" badge or blue "Install" button for each plugin.
- **Clear cache**: Purge the local GitHub API cache for fresh results.
- **Refresh**: Force re-fetch all plugin data from GitHub.
Available plugins include connectors for: GitHub, GitLab, Gmail, YouTube, Evernote, SFTP, Google Drive, Dropbox, OneDrive, S3, Azure, Slack, Notion, MongoDB, Redis, Elasticsearch, PostgreSQL, MySQL, Kafka, RabbitMQ, and many more.
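Fetching plugin metadata from GitHub boils down to a call to the GitHub REST API's organization-repositories endpoint. The sketch below shows the general idea; the exact organization slug (`Subjective-Technologies`) and any filtering the store applies are assumptions. Supplying a token raises the API rate limit from 60 to 5,000 requests/hour, which is why the `GITHUB_TOKEN` setting exists.

```python
import json
import os
import urllib.request

API = "https://api.github.com"

def build_repo_request(org, token=None):
    """Build the GitHub REST request for an organization's repositories."""
    req = urllib.request.Request(f"{API}/orgs/{org}/repos?per_page=100")
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    return req

def list_org_repos(org, token=None):
    """Fetch repo metadata; falls back to the GITHUB_TOKEN env var."""
    token = token or os.environ.get("GITHUB_TOKEN")
    with urllib.request.urlopen(build_repo_request(org, token)) as resp:
        return json.load(resp)
```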
Clicking on any plugin card opens a detailed view with full information about the plugin:
- **Icon**: Custom SVG icon for the data source.
- **Class name**: The Python class name (e.g., `subjective_sftp_datasource`).
- **Rating**: Star rating with the total number of ratings.
- **Description**: Full plugin description and capabilities.
- **README**: Rendered README content from the plugin's GitHub repository.
- **Install button**: Install the plugin to your local plugins directory.
- **Repository link**: Open the plugin's source repository in your browser.
The Pipeline Editor is a powerful visual tool for creating data processing workflows. You build pipelines by dragging connections from the left pane onto the workbench canvas and connecting them with arrows to define data flow.
A tree view of all configured connections grouped by server IP. Each server node shows an “Install Plugin” button to open the Plugin Store, and a “Create New Connection” option. Drag any connection item onto the workbench to add it as a pipeline node.
The visual canvas where you compose your pipeline. Nodes appear as icons representing their data source type. Connect nodes by dragging arrows between connection points (top/bottom/left/right). The workbench supports:
- **Zoom**: Scroll wheel or Ctrl + + / -.
- **Pan**: Middle-click drag.
- **Fit to view**: Ctrl + F or the "Fit" button.
- **Reset zoom**: Ctrl + 0.
- **Delete nodes**: Select and press Delete.
When a node is selected on the workbench, this panel shows its editable properties including the connection name, data source parameters, filter expressions, and transform configurations.
Pipelines are saved in a hybrid JSON format (.pipe) that combines:
- **Execution data**: Node definitions, dependencies, parameters, filters, and transforms.
- **Visual layout**: Node positions, connection points, and display names for the workbench.
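A hypothetical illustration of this hybrid layout is shown below. The key names (`nodes`, `layout`, etc.) are invented for the example; the real schema written by the editor may differ.

```python
import json

# Illustrative .pipe structure: execution data plus workbench layout.
# Key names here are assumptions, not the editor's actual schema.
pipeline = {
    "nodes": [  # execution data: definitions, dependencies, params
        {
            "name": "fetch_mail",
            "type": "subjective_gmail_datasource",
            "depends_on": [],
            "params": {},
            "filters": [],
            "transforms": [],
        },
    ],
    "layout": {  # visual workbench data: positions and display names
        "fetch_mail": {"x": 120, "y": 80, "display_name": "Fetch Mail"},
    },
}
print(json.dumps(pipeline, indent=2))
```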
| Action | Shortcut | Description |
|---|---|---|
| New Pipeline | Ctrl + N | Clear the workbench and start a new pipeline |
| Open Pipeline | Ctrl + O | Load a .pipe or .spipeline file |
| Save Pipeline | Ctrl + S | Save to the current file (prompts for a name on first save) |
| Save As | Ctrl + Shift + S | Save with a new file name |
| Export as JSON | — | Export for use with SubjectivePipelineDataSource |
Use Ctrl + O or File → Open Pipeline to load an existing pipeline from disk. The file dialog filters for .pipe and .spipeline files by default.
Pipelines are stored by default under `com_subjective_userdata/com_subjective_pipelines/`. The editor supports three file formats:
- **Hybrid (.pipe)**: Contains both execution nodes and the visual workbench layout.
- **Legacy (.pipe)**: Older files with workbench-only data (auto-upgraded on load).
- **Flat JSON**: Execution-only format for SubjectivePipelineDataSource.
Recently opened files are tracked under File → Recent Files for quick access.
VirtualGlass is a companion C++ application that provides an overlay interface and cross-device connectivity. It launches automatically on startup and can be toggled by left-clicking the system tray icon.
To pair a remote device with this computer, use Link VirtualGlass from the tray menu. This opens a dialog with a QR code and a text-based linking code:
1. Open the Link dialog from the tray menu: Link VirtualGlass.
2. Scan the QR code with your mobile device, or enter the linking code manually on the remote device.
3. Click "Link Device" to complete the pairing process.
Use “Link to Main Device...” from the tray menu to connect this computer as a player to another computer acting as the main device. This enables KVM (keyboard-video-mouse) input sharing between machines using the `input_unified` tool. The connection is persisted and auto-restored on startup.
The Log Viewer provides real-time monitoring of all data source processes. Logs are displayed in a color-coded table format with filtering capabilities.
| Column | Description |
|---|---|
| Timestamp | When the log entry was recorded |
| Log_Type | Severity level (INFO, WARNING, ERROR, DEBUG) |
| Process | The data source or system component that generated the log |
| Code_Location | Source file and line number for debugging |
| Message | The log message content |
| Processing_Time | Execution duration for the logged operation |
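The columns above map naturally onto a line-oriented log format. The parser below is a sketch only: the pipe-delimited layout and the `ms` suffix on the timing field are assumptions, not the application's actual on-disk format.

```python
import re

# Assumed log line layout matching the Log Viewer columns;
# the real file format may differ.
LOG_LINE = re.compile(
    r"(?P<timestamp>\S+ \S+) \| (?P<log_type>\w+) \| (?P<process>\S+) \| "
    r"(?P<code_location>\S+:\d+) \| (?P<message>.*?) \| "
    r"(?P<processing_time>[\d.]+)ms$"
)

def parse_log_line(line):
    """Split one log line into the six viewer columns, or None."""
    m = LOG_LINE.match(line)
    return m.groupdict() if m else None
```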
The Log Viewer includes a powerful 3D Performance Profiler accessible via the Charts tab. This tool parses log files to extract execution timing data and renders an interactive 3D surface chart that makes it easy to spot data source bottlenecks and performance regressions at a glance.
| Axis | Description |
|---|---|
| X — Operation Category | Groups of operations performed by the data source (e.g., network I/O, parsing, database writes). The top N categories are shown (configurable via the Top Categories field). |
| Y — Execution Time (ms) | The measured duration for each operation. Peaks in the surface indicate slow operations that are potential bottlenecks. |
| Z — Time Progression | Chronological progression showing how performance changes over time, revealing trends, spikes, or degradation. |
- Fast execution (healthy performance).
- Moderate execution time (potential concern).
- Slow execution (bottleneck detected).
1. Open the Log Viewer from the tray menu or toolbar, and select a log file for the data source you want to analyze.
2. Switch to the Charts tab at the top of the viewer.
3. Set the Top Categories count (default: 15) to control how many operation categories are displayed on the X-axis.
4. Click "Generate Performance Chart" to parse the log and render the 3D surface. The generated chart is saved as an HTML file under `com_subjective_userdata/log_charts/`.
5. Interact with the chart: rotate, zoom, and pan the 3D view to inspect specific areas. Click "Open in Browser" for a full-screen interactive view.
Tip: Use the profiler after running a data source to identify which operations consume the most time. Red peaks on the surface are your primary optimization targets. Compare charts from different runs to verify that performance improvements are effective.
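The Top Categories selection amounts to ranking operation categories by total execution time and keeping the N heaviest. This sketch shows the grouping step; the record shape extracted from the logs (`(category, duration_ms)` pairs) is an assumption.

```python
from collections import defaultdict

def top_categories(records, n=15):
    """Rank operation categories by total execution time, keeping
    the top N (mirroring the Top Categories field, default 15).

    `records` is an iterable of (category, duration_ms) pairs; the
    exact shape parsed from the logs is an assumption here.
    """
    totals = defaultdict(float)
    for category, ms in records:
        totals[category] += ms
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:n]
```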
Logs are stored in the directory configured by `LOG_PATH` (default: `com_subjective_userdata/com_subjective_logs/`). Select a log file from the dropdown at the top to view its contents. Use the filter controls to narrow down by severity, process name, or keyword.
The Snapshot Browser allows you to explore timestamped data snapshots captured by your data sources. Each snapshot contains structured data collected at a specific point in time.
- **Snapshot list**: Chronological list of snapshot timestamps. Click any entry to view its contents.
- **Content pane**: A JSON viewer that renders the snapshot data with syntax highlighting. Supports switching between formatted and raw views.
- **Controls**: Toggle Overlay Mode, Clear Viewer, and switch between Dark/Light mode.
- **Storage panel**: Database statistics including total size, log size, and a storage usage gauge (shown as a percentage circle).
The bottom of the Snapshot Browser shows storage information and provides Backup and Trash buttons for both the database and logs. The circular gauge indicates the percentage of allowed storage used.
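For orientation, here is a minimal sketch of the two ideas in this section: listing timestamped snapshots newest-first, and computing the gauge percentage. It assumes snapshots are JSON files whose names sort chronologically; the browser's real storage logic is not documented here.

```python
from pathlib import Path

# Default location per the Configuration section; adjust if overridden.
SNAPSHOTS_DIR = Path("com_subjective_userdata/com_subjective_snapshots")

def list_snapshots(directory=SNAPSHOTS_DIR):
    """Snapshot files newest-first; assumes timestamped JSON filenames
    that sort chronologically."""
    return sorted(Path(directory).glob("*.json"), reverse=True)

def storage_percent(used_bytes, allowed_bytes):
    """Value shown by the circular storage gauge (capped at 100)."""
    return min(100.0, 100.0 * used_bytes / allowed_bytes)
```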
The Chat with OnDemand Data Sources tool provides a live testing interface for interacting with running LLM-based data sources. It lets you send messages directly to any active chatbot connection and inspect responses in real time — useful for verifying that your AI data source connections are working correctly before integrating them into pipelines.
The tool auto-discovers all running OnDemand data sources: AI/LLM chatbot connectors that respond to messages on demand.
Lists all currently running OnDemand data source processes. Each entry shows:
- **Connection name**: The user-defined label (e.g., `chat_gemini`).
- **Plugin class**: The plugin class handling the connection.
- **PIDs**: Process ID and launcher PID for debugging.
Click any entry to select it as the active chat target. Use Refresh List to re-scan for newly started or stopped data sources.
The main chat area shows the conversation with the selected data source. The header displays the active connection name and PID. Features include:
- **Chat log**: Timestamped connection events and message exchanges.
- **Attach files**: Send file attachments along with your message (for multimodal models that support it).
- **Clear attachments**: Remove queued file attachments before sending.
- **Message input**: Type your test prompt and click Send.
- **Clear chat**: Reset the conversation history.
1. Start your LLM connections from the Connections window by clicking the Play button on each chatbot data source you want to test.
2. Open the tool from the toolbar (OnDemand DataSource icon) or via the tray menu.
3. Select a running data source from the left panel; the tool connects automatically and shows a confirmation in the chat log.
4. Type a test message in the input field and click Send to verify the connection responds correctly.
5. Switch between data sources by clicking different entries in the left panel to test multiple LLMs in the same session.
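Since the UI and data sources communicate over Redis (see the startup sequence), a test message conceptually becomes a payload published to a per-connection channel. Everything in this sketch is hypothetical: the channel naming scheme and message envelope are invented for illustration, not the application's real internal protocol.

```python
import json
import time

def chat_channel(connection_name, direction="in"):
    """Hypothetical channel naming; the app's real Redis channels
    are internal and may be named differently."""
    return f"ondemand:{connection_name}:{direction}"

def make_message(prompt, attachments=()):
    """Envelope mirroring the chat panel: text plus optional files."""
    return json.dumps({
        "timestamp": time.time(),
        "prompt": prompt,
        "attachments": list(attachments),
    })

# With the redis-py client one could then publish, e.g.:
#   import redis
#   redis.Redis().publish(chat_channel("chat_gemini"), make_message("hello"))
```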
Subjective is configured via `subjective.conf` (or `subjective_linux.conf` on Linux). Key configuration options:
| Key | Default | Description |
|---|---|---|
| USERDATA_PATH | com_subjective_userdata | Base path for all user data, plugins, snapshots, and logs |
| LOG_PATH | com_subjective_userdata/com_subjective_logs | Directory for log files |
| PIPELINES_PATH | com_subjective_userdata/com_subjective_pipelines | Directory for saved pipeline files |
| SNAPSHOTS_DIR | com_subjective_userdata/com_subjective_snapshots | Directory for data snapshots |
| REDIS_SERVER_IP | localhost | Redis server hostname |
| REDIS_SERVER_PORT | 6379 | Redis server port |
| REDIS_EMBEDDED | true | Use embedded Redis (recommended for single-machine setups) |
| CURRENT_THEME_SELECTED | theme_default.json | Active UI theme file |
| LOG_ENABLE_FILES | false | Write logs to individual files per source |
| LOG_ENABLE_TERMINAL_OUTPUT | true | Mirror log output to the terminal console |
| SUBJECTIVE_CLIENT_PATH | (auto-detected) | Override path to the VirtualGlass executable |
| KVM_KEYBOARD_PATH | (auto-detected) | Path to the KVM input_unified tool |
| GITHUB_TOKEN | (none) | GitHub personal access token for Plugin Store API calls |
| FFMPEG_PATH | (auto-detected) | Path to FFmpeg for multimedia processing plugins |
| RCLONE_PATH | (auto-detected) | Path to rclone for cloud storage plugins |
Tip: Paths can be absolute or relative to the project root. Environment variables (`$HOME`, `%USERPROFILE%`) and `~` are expanded automatically. The application also performs recursive path resolution for tool binaries.
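The expansion rules described in the tip can be approximated with the standard library. This is a sketch of the idea, not the application's actual resolver; note that `os.path.expandvars` only expands `%VAR%`-style variables on Windows.

```python
import os

def resolve_path(value, root="."):
    """Expand env vars and ~, then anchor relative paths at the
    project root, roughly mirroring the path handling above.

    %USERPROFILE%-style variables expand only when running on Windows.
    """
    expanded = os.path.expanduser(os.path.expandvars(value))
    if not os.path.isabs(expanded):
        expanded = os.path.join(root, expanded)
    return os.path.normpath(expanded)
```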
- **VirtualGlass does not launch**: Check the VirtualGlass log file in your logs directory (timestamped as `*-virtual_glass-launcher.log`). Verify that the executable path in `SUBJECTIVE_CLIENT_PATH` is correct, or let auto-detection find it under `com_subjective_tools/subjective_client_desktop/build/`.
- **Redis connection errors**: Ensure `REDIS_EMBEDDED=true` in your config. The embedded Redis is recommended for local development. For an external Redis, verify the host and port settings.
- **Plugin Store shows no plugins**: This is usually caused by GitHub API rate limiting (60 requests/hour for unauthenticated access). Set a `GITHUB_TOKEN` in your configuration or environment to raise the limit to 5,000 requests/hour.
- **Data source missing from the dropdown**: Ensure the plugin is properly installed (check the Plugin Store), then restart the DataSourceManager or use the Plugin Store's Refresh button. The dropdown is populated by querying the running DataSourceManager service via Redis.
- **Pipeline fails to save**: Check that the `PIPELINES_PATH` directory exists and is writable. The editor uses a fallback save mechanism if the pipeline persistence module is unavailable.