Subjective Technologies

User Manual

A comprehensive guide to operating Subjective and VirtualGlass — from initial setup to advanced monitoring and troubleshooting.

Application Overview

Subjective Technologies is a desktop application built with PyQt5 that serves as a universal data source management platform. It enables you to connect, monitor, and orchestrate data from dozens of sources — cloud services, local folders, databases, APIs, and more — through a unified interface.

Connection Management

Configure and manage connections to 30+ data source types with per-connection settings, credentials, and start-at-startup capabilities.

Visual Pipeline Editor

Drag-and-drop pipeline builder for creating multi-step data processing workflows with node connections and visual orchestration.

Plugin Store

App Store-style marketplace for discovering, installing, and managing data source plugins fetched directly from GitHub.

VirtualGlass

Companion C++ overlay application for cross-device connectivity with QR-code based device linking and KVM support.

Snapshot Browser

Browse timestamped data snapshots with a JSON viewer, source code display, and database management tools.

Log Viewer

Color-coded, filterable log viewer with tabular display for real-time monitoring and debugging of all data sources.

Getting Started

When you launch Subjective, the application performs the following startup sequence:

Configuration Loading

Reads `subjective.conf` from the project root, resolving all paths for tools, logs, and user data.

Redis Initialization

Starts an embedded Redis server (or connects to an external one) for inter-process messaging between the UI and data source services.

Privacy Dialog

A one-time privacy consent dialog appears. You must accept to continue.

Data Source Manager

The local DataSourceManager service starts, registering all installed plugins and making them available for connections.

VirtualGlass Launch

The VirtualGlass overlay application launches automatically in the background.

System Tray

The Subjective icon appears in your system tray. Right-click it to access all features. The main Connections window is hidden by default.

Auto-Start Connections

After a 3-second delay, any connections marked with “Start at Startup” are automatically started.
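The configuration-then-Redis portion of this startup sequence can be sketched in Python. The key names match the Configuration Reference later in this manual; `parse_conf` and the sample text are illustrative, not the application's actual loader:

```python
# Illustrative sketch of startup configuration loading. Key names match
# the Configuration Reference; parse_conf is a hypothetical helper.

def parse_conf(text):
    """Parse simple KEY=value lines from a subjective.conf-style file."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf

sample = """
REDIS_SERVER_IP=localhost
REDIS_SERVER_PORT=6379
REDIS_EMBEDDED=true
"""

conf = parse_conf(sample)

# Decide between embedded and external Redis, as described above.
use_embedded = conf.get("REDIS_EMBEDDED", "true").lower() == "true"
redis_addr = (conf.get("REDIS_SERVER_IP", "localhost"),
              int(conf.get("REDIS_SERVER_PORT", "6379")))
```

With `REDIS_EMBEDDED=true`, the application would start its own server at this point; otherwise it would connect to `redis_addr`.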

System Tray Menu

The system tray icon is your primary entry point to all Subjective features. Right-click it to see the full context menu:

Figure 1: System Tray context menu showing all available features

Managing Connections

The Connections window is the central hub for configuring all your data source connections. It is divided into two main panes:

Figure 2: Main Connections window — Left: Connection Edit Form / Right: Connection List with controls

Left Pane: Connection Edit Form

The left pane contains the Connection Edit Form where you configure individual data source connections. Fields include:

Server

Select the target server (IP address) where data will be processed.

Data Source

Choose from installed data source plugins (populated dynamically from the server).

Connection Name

A user-friendly label for this connection.

Start at Startup

Check to auto-start this connection when the application launches.

Dynamic Fields

Each data source plugin provides its own configuration fields (API keys, paths, tokens, intervals, etc.).

Right Pane: Connections List

The right pane displays all configured connections. Each connection row shows:

Data source icon and name

Each row identifies the connection at a glance with its iconography.

Play/Stop button

Start or stop the data source process.

Progress bar

Shows processing progress for running data sources.

Metadata indicators

Timestamp of last run, plugin class name.

Bottom: Pipelines Section

Below the edit form, the Pipelines section lets you browse and select saved pipeline files (.pipe) to add them as executable pipeline connections in the list. Selecting a pipeline creates a SubjectivePipelineDataSource connection entry.

Creating New Data Sources

Subjective allows you to create your own custom data source plugins. Use the “New DataSource” toolbar button to scaffold a new plugin project with all required files.

Figure 3: New Data Source dialog — Enter a name to scaffold a complete plugin template

01

Click “New DataSource” from the toolbar (grid icon with a plus sign).

02

Enter a name for your new data source in the popup dialog (e.g., “my_custom_api”).

03

Click OK — The system generates a complete Python plugin template under the plugins directory, including the data source class, setup.py, icon template, and documentation skeleton.

04

Edit the generated code to implement your custom data collection logic.

Plugin Store

The Plugin Store provides an App Store-like experience for discovering and installing data source plugins. Plugins are fetched from the Subjective Technologies GitHub organization and displayed in a searchable grid.

Figure 4: Plugin Store — Grid view of available data source plugins with install status and ratings

Features

Search

Filter plugins by name, class, or description using the search bar.

Star Ratings

Each plugin displays a community rating.

Install Status

Green “Installed” badge or blue “Install” button for each plugin.

Clear Cache

Purge the local GitHub API cache for fresh results.

Refresh

Force re-fetch all plugin data from GitHub.

Available plugins include connectors for: GitHub, GitLab, Gmail, YouTube, Evernote, SFTP, Google Drive, Dropbox, OneDrive, S3, Azure, Slack, Notion, MongoDB, Redis, Elasticsearch, PostgreSQL, MySQL, Kafka, RabbitMQ, and many more.

Plugin Details & Installation

Clicking on any plugin card opens a detailed view with full information about the plugin:

Figure 5: Plugin Detail View for “Subjective Sftp” — showing class name, rating, description, and action buttons

Detail View Contents

Plugin Icon

Custom SVG icon for the data source.

Class Name

The Python class name (e.g., `subjective_sftp_datasource`).

Rating

Star rating with the total number of ratings.

Description

Full plugin description and capabilities.

Documentation

Rendered README content from the plugin's GitHub repository.

Download button

Install the plugin to your local plugins directory.

View on GitHub button

Open the plugin's source repository in your browser.

Pipeline Editor

The Pipeline Editor is a powerful visual tool for creating data processing workflows. You build pipelines by dragging connections from the left pane onto the workbench canvas and connecting them with arrows to define data flow.

Figure 6: Pipeline Editor — Left: Available connections tree / Center: Visual workbench with connected nodes / Right: Node properties panel

Editor Layout (Three Panes)

Left Pane: Available Connections

A tree view of all configured connections grouped by server IP. Each server node shows an “Install Plugin” button to open the Plugin Store, and a “Create New Connection” option. Drag any connection item onto the workbench to add it as a pipeline node.

Center Pane: Pipeline Workbench

The visual canvas where you compose your pipeline. Nodes appear as icons representing their data source type. Connect nodes by dragging arrows between connection points (top/bottom/left/right). The workbench supports:

Zoom

Scroll wheel or Ctrl + + / -.

Pan

Middle-click drag.

Fit to View

Ctrl + F or the “Fit” button.

Reset Zoom

Ctrl + 0.

Delete Node

Select and press Delete.

Right Pane: Node Properties

When a node is selected on the workbench, this panel shows its editable properties including the connection name, data source parameters, filter expressions, and transform configurations.

Pipeline File Format

Pipelines are saved in a hybrid JSON format (.pipe) that combines:

Execution model

Node definitions, dependencies, parameters, filters, and transforms.

Visual layout

Node positions, connection points, and display names for the workbench.
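As a rough sketch of this hybrid layout, the following builds and round-trips a minimal pipeline document. The field names (`nodes`, `edges`, `layout`) and connection names are illustrative; inspect a saved .pipe file for the authoritative schema:

```python
import json

# Illustrative hybrid .pipe document: execution model plus visual layout
# in one JSON object. Field and connection names are invented examples.
pipeline = {
    "nodes": [
        {"id": "n1", "connection": "gmail_inbox", "params": {}, "filters": []},
        {"id": "n2", "connection": "postgres_sink", "params": {}, "filters": []},
    ],
    "edges": [{"from": "n1", "to": "n2"}],   # data flows n1 -> n2
    "layout": {                               # workbench positions
        "n1": {"x": 100, "y": 80},
        "n2": {"x": 320, "y": 80},
    },
}

serialized = json.dumps(pipeline, indent=2)
restored = json.loads(serialized)
```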

Menu Bar Operations

Action           Shortcut            Description
New Pipeline     Ctrl + N            Clear the workbench and start a new pipeline
Open Pipeline    Ctrl + O            Load a .pipe or .spipeline file
Save Pipeline    Ctrl + S            Save to the current file (prompts for name)
Save As          Ctrl + Shift + S    Save with a new file name
Export as JSON   (none)              Export for use with SubjectivePipelineDataSource

Loading & Saving Pipelines

Use Ctrl + O or File → Open Pipeline to load an existing pipeline from disk. The file dialog filters for .pipe and .spipeline files by default.

Figure 7: File dialog for loading pipeline files from the pipelines directory

Pipelines are stored by default under `com_subjective_userdata/com_subjective_pipelines/`. The editor supports three file formats:

Hybrid format (.pipe)

Contains both execution nodes and visual workbench layout.

Legacy format

Older .pipe files with workbench-only data (auto-upgraded on load).

JSON export (.json)

Flat execution-only format for SubjectivePipelineDataSource.

Recently opened files are tracked under File → Recent Files for quick access.

VirtualGlass & Device Linking

VirtualGlass is a companion C++ application that provides an overlay interface and cross-device connectivity. It launches automatically on startup and can be toggled by left-clicking the system tray icon.

Figure 8: VirtualGlass device linking dialog with QR code and linking code

Linking a Device

To pair a remote device with this computer, use Link VirtualGlass from the tray menu. This opens a dialog with a QR code and a text-based linking code:

01

Open the Link dialog from the tray menu: Link VirtualGlass.

02

Scan the QR code with your mobile device or enter the linking code manually on the remote device.

03

Click “Link Device” to complete the pairing process.

Link to Main Device

Use “Link to Main Device...” from the tray menu to connect this computer as a player to another computer acting as the main device. This enables KVM (keyboard-video-mouse) input sharing between machines using the `input_unified` tool. The connection is persisted and auto-restored on startup.

Log Viewer

The Log Viewer provides real-time monitoring of all data source processes. Logs are displayed in a color-coded table format with filtering capabilities.

Figure 9: Log Viewer — color-coded tabular log display with timestamps, process names, code locations, and messages

Log Table Columns

Column            Description
Timestamp         When the log entry was recorded
Log_Type          Severity level (INFO, WARNING, ERROR, DEBUG)
Process           The data source or system component that generated the log
Code_Location     Source file and line number for debugging
Message           The log message content
Processing_Time   Execution duration for the logged operation
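Filtering along these columns can be sketched as follows; the row dictionaries are illustrative, not the viewer's internal representation:

```python
# Illustrative log rows mirroring the Log Viewer columns above.
rows = [
    {"Timestamp": "2024-01-01 10:00:00", "Log_Type": "INFO",
     "Process": "gmail", "Message": "fetched 12 messages"},
    {"Timestamp": "2024-01-01 10:00:05", "Log_Type": "ERROR",
     "Process": "sftp", "Message": "connection refused"},
]

def filter_logs(rows, severity=None, keyword=None):
    """Keep rows matching an optional severity and message keyword."""
    out = []
    for row in rows:
        if severity and row["Log_Type"] != severity:
            continue
        if keyword and keyword.lower() not in row["Message"].lower():
            continue
        out.append(row)
    return out

errors = filter_logs(rows, severity="ERROR")
```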

3D Performance Profiler

The Log Viewer includes a powerful 3D Performance Profiler accessible via the Charts tab. This tool parses log files to extract execution timing data and renders an interactive 3D surface chart that makes it easy to spot data source bottlenecks and performance regressions at a glance.

Figure 9b: 3D Performance Profiler — surface chart visualizing execution times across operation categories over time, with color gradient from green (fast) to red (slow)

Chart Axes

X — Operation Category

Groups of operations performed by the data source (e.g., network I/O, parsing, database writes). The top N categories are shown (configurable via the Top Categories field).

Y — Execution Time (ms)

The measured duration for each operation. Peaks in the surface indicate slow operations that are potential bottlenecks.

Z — Time Progression

Chronological progression showing how performance changes over time, revealing trends, spikes, or degradation.

Color Gradient

Green

Fast execution (healthy performance).

Yellow

Moderate execution time (potential concern).

Red

Slow execution (bottleneck detected).

How to Use the Profiler

01

Open the Log Viewer from the tray menu or toolbar, and select a log file for the data source you want to analyze.

02

Switch to the Charts tab at the top of the viewer.

03

Set the Top Categories count (default: 15) to control how many operation categories are displayed on the X-axis.

04

Click “Generate Performance Chart” to parse the log and render the 3D surface. The generated chart is saved as an HTML file under `com_subjective_userdata/log_charts/`.

05

Interact with the chart — rotate, zoom, and pan the 3D view to inspect specific areas. Click “Open in Browser” for a full-screen interactive view.

Tip: Use the profiler after running a data source to identify which operations consume the most time. Red peaks on the surface are your primary optimization targets. Compare charts from different runs to verify that performance improvements are effective.
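The profiler's data-preparation step, grouping execution times by operation category and keeping the top N, can be sketched as follows. The log-line format here is invented for illustration; the real parser reads the columns shown in the Log Table:

```python
from collections import defaultdict

# Illustrative timing extraction: "category duration_ms" per line.
# The actual log format is the tabular one described earlier.
log_lines = [
    "network_io 120.5",
    "parsing 14.2",
    "network_io 98.1",
    "db_write 340.0",
]

timings = defaultdict(list)
for line in log_lines:
    category, _, ms = line.partition(" ")
    timings[category].append(float(ms))

# Keep only the top-N categories by total time, as the
# "Top Categories" field does (N=2 here for brevity).
top = sorted(timings, key=lambda c: sum(timings[c]), reverse=True)[:2]
```

The grouped series then become the X (category), Y (duration), and Z (chronological order) inputs to the 3D surface.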

Logs are stored in the directory configured by `LOG_PATH` (default: `com_subjective_userdata/com_subjective_logs/`). Select a log file from the dropdown at the top to view its contents. Use the filter controls to narrow down by severity, process name, or keyword.

Snapshot Browser

The Snapshot Browser allows you to explore timestamped data snapshots captured by your data sources. Each snapshot contains structured data collected at a specific point in time.

Figure 10: Snapshot Browser — Left: Timestamped snapshot list / Right: JSON viewer with structured data display

Interface Layout

Left Panel

Chronological list of snapshot timestamps. Click any entry to view its contents.

Right Panel

A JSON viewer that renders the snapshot data with syntax highlighting. Supports switching between formatted and raw views.

Top Bar

Toggle Overlay Mode, Clear Viewer, and switch between Dark/Light mode.

Bottom Bar

Database statistics including total size, log size, and storage usage gauge (shown as a percentage circle).

Database Management

The bottom of the Snapshot Browser shows storage information and provides Backup and Trash buttons for both the database and logs. The circular gauge indicates the percentage of allowed storage used.

OnDemand Data Source Testing

The Chat with OnDemand Data Sources tool provides a live testing interface for interacting with running LLM-based data sources. It lets you send messages directly to any active chatbot connection and inspect responses in real time — useful for verifying that your AI data source connections are working correctly before integrating them into pipelines.

Figure 11: Chat with OnDemand Data Sources — testing interface showing running LLM connections (Grok, Gemini, ChatGPT, Claude, Llama, Qwen) with a live chat panel

Supported LLM Data Sources

The tool auto-discovers all running OnDemand data sources. These are AI/LLM chatbot connectors that respond to messages on demand. Currently supported providers include:

Grok, Gemini, ChatGPT, Claude, Llama, Qwen

Interface Layout

Left Panel: Running Data Sources

Lists all currently running OnDemand data source processes. Each entry shows:

Connection name

The user-defined label (e.g., `chat_gemini`).

Type

The plugin class handling the connection.

PID

Process ID and launcher PID for debugging.

Click any entry to select it as the active chat target. Use Refresh List to re-scan for newly started or stopped data sources.

Right Panel: Chat Interface

The main chat area shows the conversation with the selected data source. The header displays the active connection name and PID. Features include:

Message log

Timestamped connection events and message exchanges.

Attach Files

Send file attachments along with your message (for multimodal models that support it).

Clear Attachments

Remove queued file attachments before sending.

Message input

Type your test prompt and click Send.

Clear Chat

Reset the conversation history.

How to Use

01

Start your LLM connections from the Connections window by clicking the Play button on each chatbot data source you want to test.

02

Open the tool from the toolbar (OnDemand DataSource icon) or via the tray menu.

03

Select a running data source from the left panel — the tool connects automatically and shows a confirmation in the chat log.

04

Type a test message in the input field and click Send to verify the connection responds correctly.

05

Switch between data sources by clicking different entries in the left panel to test multiple LLMs in the same session.

Configuration Reference

Subjective is configured via `subjective.conf` (or `subjective_linux.conf` on Linux). Key configuration options:

Key                           Default                                            Description
USERDATA_PATH                 com_subjective_userdata                            Base path for all user data, plugins, snapshots, and logs
LOG_PATH                      com_subjective_userdata/com_subjective_logs        Directory for log files
PIPELINES_PATH                com_subjective_userdata/com_subjective_pipelines   Directory for saved pipeline files
SNAPSHOTS_DIR                 com_subjective_userdata/com_subjective_snapshots   Directory for data snapshots
REDIS_SERVER_IP               localhost                                          Redis server hostname
REDIS_SERVER_PORT             6379                                               Redis server port
REDIS_EMBEDDED                true                                               Use embedded Redis (recommended for single-machine setups)
CURRENT_THEME_SELECTED        theme_default.json                                 Active UI theme file
LOG_ENABLE_FILES              false                                              Write logs to individual files per source
LOG_ENABLE_TERMINAL_OUTPUT    true                                               Mirror log output to the terminal console
SUBJECTIVE_CLIENT_PATH        (auto-detected)                                    Override path to the VirtualGlass executable
KVM_KEYBOARD_PATH             (auto-detected)                                    Path to the KVM input_unified tool
GITHUB_TOKEN                  (none)                                             GitHub personal access token for Plugin Store API calls
FFMPEG_PATH                   (auto-detected)                                    Path to FFmpeg for multimedia processing plugins
RCLONE_PATH                   (auto-detected)                                    Path to rclone for cloud storage plugins

Tip: Paths can be absolute or relative to the project root. Environment variables (`$HOME`, `%USERPROFILE%`) and `~` are expanded automatically. The application also performs recursive path resolution for tool binaries.
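The path resolution described in the tip can be sketched as below; `resolve_path` is an illustrative helper, not the application's actual function:

```python
import os

# Illustrative path resolution: expand ~ and environment variables,
# then anchor relative paths at the project root.
def resolve_path(value, project_root):
    expanded = os.path.expandvars(os.path.expanduser(value))
    if not os.path.isabs(expanded):
        expanded = os.path.join(project_root, expanded)
    return expanded

path = resolve_path("com_subjective_userdata/com_subjective_logs",
                    "/opt/subjective")
```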

Troubleshooting

Common Issues

VirtualGlass won't launch

Check the VirtualGlass log file in your logs directory (timestamped as *-virtual_glass-launcher.log). Verify that the executable path in SUBJECTIVE_CLIENT_PATH is correct, or let auto-detection find it under com_subjective_tools/subjective_client_desktop/build/.

Redis connection errors

If you see Redis connection errors, ensure REDIS_EMBEDDED=true in your config. The embedded Redis is recommended for local development. For external Redis, verify the host and port settings.

Plugin Store shows no plugins

This is usually caused by GitHub API rate limiting (60 requests/hour for unauthenticated access). Set a GITHUB_TOKEN in your configuration or environment to increase the limit to 5,000 requests/hour.
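Sending the token as an Authorization header on each GitHub API call is what raises the limit. A minimal sketch with Python's standard library follows; no request is actually sent here, and the token shown is a placeholder:

```python
import urllib.request

# Placeholder token; in practice this comes from GITHUB_TOKEN in the
# configuration or environment.
token = "ghp_example_token"

# Build an authenticated request against GitHub's rate-limit endpoint.
req = urllib.request.Request(
    "https://api.github.com/rate_limit",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
)
# urllib.request.urlopen(req) would return the current limits as JSON.
```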

Data source doesn't appear in the dropdown

Ensure the plugin is properly installed (check the Plugin Store), then restart the DataSourceManager or use the Plugin Store's Refresh button. The dropdown is populated by querying the running DataSourceManager service via Redis.

Pipeline editor can't save

Check that the PIPELINES_PATH directory exists and is writable. The editor uses a fallback save mechanism if the pipeline persistence system module is unavailable.