# 20251114

## Conclusion

DataView is now conceptualized as a thin, ephemeral, developer-facing object, similar to .NET’s `SqlCommand`:

* Holds a query definition and the results of that query as a lightweight, row-major cache.
* Can be registered with DataLens for orchestration in the future, allowing automatic query refresh and batched update commits once double-buffering is implemented.
* Registered DataViews will support optional priority for deterministic batching; currently, operations are handled manually or via callbacks.
* Non-registered DataViews and manual Query/Update/Commit operations will be handled via a locking/buffering model once the orchestration layer is implemented.
* DataLens currently provides serialize/deserialize functionality for DataStores, ensuring schema-aware memory representation and type-safe conversion.

Phase 1 work so far has confirmed that:

* Column names can be used to reconcile schema changes automatically (columns added/removed or with changed types).
* Type conversions between old and new column types can be applied safely during `ConvertToSchema`.
* DataLens can deliver row-major, engine-facing views on top of column-major, cache-aware DataStore layouts.

***

## Detailed Notes and Thought Process

**1. DataView Purpose**

* Acts as a transactional, developer-facing cache.
* Contains the results of queries and staged updates (future implementation).
* Provides optional orchestration through registration with DataLens (future work).
* Does not own memory; underlying buffers and operations are managed by DataLens/DataStore.

**2. Registered vs Non-Registered Views (Planned)**

* Registered DataViews: will eventually support automatic refresh/commit each tick with optional priority.
* Non-registered / manual DataViews: operations can be executed on demand; currently the system only provides low-level DataLens hooks.

**3. Query and Update Model (Planned)**

* Queries: will return row-major results from DataStores.
* Updates: will be staged through DataView and applied via DataLens.
* Commit: will support batching updates per-store or globally once orchestration is implemented.

**4. Transactional and Cache Behavior (Planned)**

* Updates will be staged until commit.
* Queries will operate on cached results.
* DataLens will ensure thread safety and determinism during execution.

**5. Callback Mechanism**

* All DataView operations will expose callbacks for completion once orchestration is implemented.

**6. Schema and Type Conversion**

* `ConvertToSchema` reconciles DataStore dumps to match the in-memory schema, handling:
  * Column addition/removal.
  * Data type changes with conversion.
  * Default values for new columns.
* This allows developers to modify schema without breaking DataLens/DataView operations.

**7. Analogy (Planned)**

* Registered DataViews: auto-refreshing, transactional `SqlCommand` with optional prioritization.
* Non-registered DataViews: manual `SqlCommand` executed on demand, subject to DataLens scheduling.

***

## Implemented So Far

* `DataLens` serialize/deserialize for DataStores.
* Schema-aware `ConvertToSchema` in `DataStore`, automatically reconciling columns and types.
* Stubbed DataView object supporting basic query/update/commit interface (no orchestration yet).
* Verified row-major DataView access over column-major DataStore layout with minimal overhead.

***

## Next Steps

* Implement double-buffered read/write support in DataLens for registered DataViews.
* Define full C++ and C# APIs for DataView (registration, query, update, commit, callbacks).
* Implement priority and tick frequency metadata for orchestrated DataViews.
* Build sample tests for row-major queries and updates against column-major DataStore to validate performance and thread safety.
