SoloDB Documentation
SoloDB is an embedded document database for .NET that stores your objects as JSON documents inside SQLite. It gives you the flexibility of a NoSQL database with the reliability of SQLite, all without running a separate server.
When to Use SoloDB
SoloDB is ideal for:
- Desktop applications that need local data persistence
- Mobile apps via .NET MAUI
- Web applications that don't need distributed databases
- Prototyping when you want to store objects without defining schemas
- Embedded systems where you need a reliable, self-contained database
Key Characteristics
- Documents stored as SQLite JSONB - binary JSON for efficient storage and querying
- Full LINQ support with compile-time type safety
- ACID transactions inherited from SQLite
- Thread-safe with built-in connection pooling
- Zero configuration - just create the database and start using it
Installation
Install from NuGet:
dotnet add package SoloDB
Requirements
- .NET Standard 2.0 or 2.1 (compatible with .NET Framework 4.6.1+, .NET Core 2.0+, .NET 5+)
- Works on Windows, Linux, and macOS
First Steps
Let's store and retrieve some data. First, define a class for your data:
public class User
{
public long Id { get; set; } // This becomes the primary key
public string Name { get; set; }
public string Email { get; set; }
public DateTime CreatedAt { get; set; }
}
Now create a database, get a collection, and perform operations:
using SoloDatabase;
// Create or open a database file
using var db = new SoloDB("myapp.db");
// Get a typed collection - creates it automatically if it doesn't exist
var users = db.GetCollection<User>();
// Insert a document
var user = new User
{
Name = "Alice",
Email = "alice@example.com",
CreatedAt = DateTime.UtcNow
};
users.Insert(user);
// user.Id is now set to the auto-generated value (e.g., 1)
// Query with LINQ
var found = users.FirstOrDefault(u => u.Email == "alice@example.com");
// Update
found.Name = "Alice Smith";
users.Update(found);
// Delete
users.Delete(found.Id);
Note: The using statement ensures the database connection is properly closed when done. In long-running applications, you typically create one SoloDB instance and reuse it throughout the application's lifetime.
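One way to apply the single-instance advice is to register SoloDB in a dependency-injection container. The sketch below is illustrative only: it assumes the Microsoft.Extensions.DependencyInjection package, which is not part of SoloDB itself.

```csharp
using Microsoft.Extensions.DependencyInjection;
using SoloDatabase;

var services = new ServiceCollection();

// One SoloDB instance for the whole application lifetime;
// the container disposes it (closing the connection) on shutdown.
services.AddSingleton(_ => new SoloDB("myapp.db"));

using var provider = services.BuildServiceProvider();

// Resolve the shared instance anywhere a collection is needed.
var db = provider.GetRequiredService<SoloDB>();
var users = db.GetCollection<User>();
```

Because SoloDB has built-in connection pooling and is thread-safe, a single shared instance is safe to use from multiple request handlers.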
How Data is Stored
Understanding how SoloDB stores your data helps you design better models and write efficient queries.
The Storage Model
Each collection is a SQLite table with three columns:
- Id - INTEGER PRIMARY KEY (auto-incremented by default)
- Value - JSONB containing your serialized object
- Metadata - JSONB used internally by SoloDB (relation version tracking)
When you insert this object:
var user = new User { Name = "Alice", Email = "alice@example.com" };
users.Insert(user);
SoloDB creates a row like this:
// Conceptually:
// Id: 1
// Value: {"Name":"Alice","Email":"alice@example.com","CreatedAt":"..."}
JSONB Format
SoloDB uses SQLite's native JSONB (binary JSON) format, which means:
- Queries can use SQLite's JSON functions for efficient filtering
- No JSON parsing overhead on every read - binary format is faster
- You can even query the data using raw SQL with JSON functions
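As a sketch of the raw-SQL point above: assuming the table layout described in "The Storage Model" (an Id column plus a JSONB Value column, with the table named after the type, e.g. "User"), a direct SQLite query could look like this. The exact table name and quoting are assumptions; verify against your own database file.

```sql
-- Illustrative only: assumes the collection table is named "User"
-- with the Id / Value / Metadata layout described in this document.
SELECT Id,
       jsonb_extract(Value, '$.Name') AS Name
FROM "User"
WHERE jsonb_extract(Value, '$.Email') = 'alice@example.com';
```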
Serialization Rules Since v1.0.0
Understanding what gets serialized is crucial for designing your data models correctly.
Classes: Only Public Properties
For classes, SoloDB serializes public instance properties with getters. Fields are ignored.
public class Example
{
// SERIALIZED - public property with getter
public string Name { get; set; }
// SERIALIZED - public property (getter required, setter for deserialization)
public int Age { get; set; }
// NOT SERIALIZED - field (even if public)
public string PublicField;
// NOT SERIALIZED - private property
private string Secret { get; set; }
// NOT SERIALIZED - internal property
internal string Internal { get; set; }
}
Important: For deserialization, properties need a public setter. Read-only properties can be serialized but won't be populated when loading from the database.
Structs: Fields and Properties
Structs behave differently - both public fields AND public properties are serialized:
public struct Point
{
public int X; // SERIALIZED - public field on struct
public int Y; // SERIALIZED - public field on struct
public double Distance { get; set; } // SERIALIZED - public property
}
Supported Types
SoloDB's built-in serializer handles these types natively:
| Primitives | int, long, float, double, decimal, bool, char, byte, etc. |
| Strings | string (null-safe) |
| Date/Time | DateTime, DateTimeOffset, TimeSpan |
| GUIDs | Guid |
| Collections | Arrays, List<T>, Dictionary<K,V>, HashSet<T>, Queue<T>, Stack<T> |
| Nullable | Nullable<T> (e.g., int?, DateTime?) |
| Tuples | ValueTuple, Tuple |
| F# Types | F# records, discriminated unions, F# lists |
| Binary | byte[] (stored as Base64) |
| Nested Objects | Any class/struct following these rules |
Nested Objects
Objects can contain other objects to any depth:
public class Order
{
public long Id { get; set; }
public Customer Customer { get; set; } // Nested object
public List<OrderItem> Items { get; set; } // List of objects
public Address ShippingAddress { get; set; } // Another nested object
}
public class OrderItem
{
public string ProductName { get; set; }
public int Quantity { get; set; }
public decimal Price { get; set; }
}
What to Avoid
- Circular references - will cause stack overflow
- Very deep nesting - impacts performance and query complexity
- Storing huge binary data - use the FileSystem API instead
Custom JSON Serializer
SoloDB uses its own high-performance JSON serializer instead of Newtonsoft.Json or System.Text.Json. The serializer is designed specifically for document database use cases:
- Generic type caching - Serializers are generated once per type and cached for subsequent use
- Polymorphic support - Automatically adds a $type discriminator for non-sealed types when needed
- F# native support - Records, discriminated unions, and F# lists are handled natively
- No external dependencies - Self-contained implementation with no JSON library dependencies
The serializer converts objects to an internal JsonValue representation which is then stored as SQLite JSONB. Deserialization reads the JSONB and maps it back to your types using the same rules described above.
ID Generation Since v1.0.0
Every document needs a unique identifier. SoloDB provides flexible options for ID handling.
Default: Auto-Increment Long
The simplest approach - name a property Id with type long:
public class Product
{
public long Id { get; set; } // Auto-detected as primary key
public string Name { get; set; }
}
var products = db.GetCollection<Product>();
var product = new Product { Name = "Widget" };
products.Insert(product);
// product.Id is now 1, 2, 3, etc.
Custom ID with [SoloId] Attribute
For other ID types or custom generation logic, use the [SoloId] attribute with a custom generator:
using SoloDatabase.Attributes;
// Define a custom ID generator that produces string IDs from GUIDs
public class StringGuidIdGenerator : IIdGenerator<Document>
{
public object GenerateId(ISoloDBCollection<Document> collection, Document item)
{
return Guid.NewGuid().ToString("N"); // Returns string like "a1b2c3d4..."
}
public bool IsEmpty(object id)
{
return string.IsNullOrEmpty(id as string);
}
}
// Use it in your model
public class Document
{
[SoloId(typeof(StringGuidIdGenerator))]
public string Id { get; set; } // String ID, not Guid
public string Title { get; set; }
public string Content { get; set; }
}
Supported ID Types
| long | Default, auto-incremented by SQLite |
| int | Auto-incremented (cast from SQLite's int64) |
| Guid | Requires a generator (e.g., one that calls Guid.NewGuid()) |
| string | Must be provided by your generator or set before insert |
Note: For long and int ID types without a custom generator, SQLite handles auto-incrementing automatically. You don't need to implement a generator for these common cases.
Custom Guid ID Generator Example
Here's a simple generator for Guid IDs:
public class GuidIdGenerator : IIdGenerator<MyDocument>
{
public object GenerateId(ISoloDBCollection<MyDocument> collection, MyDocument item)
{
return Guid.NewGuid();
}
public bool IsEmpty(object id) => id is Guid g && g == Guid.Empty;
}
Working with Collections Since v1.0.0
Collections are containers for your documents, similar to tables in SQL databases.
Getting a Collection
// Typed collection - name derived from type (recommended)
var users = db.GetCollection<User>(); // Collection name: "User"
// Custom name - useful for multiple collections of same type
var activeUsers = db.GetCollection<User>("ActiveUsers");
var archivedUsers = db.GetCollection<User>("ArchivedUsers");
// Untyped collection for dynamic scenarios
var untypedCollection = db.GetUntypedCollection("MyData");
Collection Lifecycle
- Collections are created automatically when first accessed
- The underlying SQLite table is created with the proper schema
- Indexes defined via attributes are created on first access
Reserved names: Collection names starting with SoloDB are reserved for internal use and will throw an ArgumentException. For example, "SoloDBUsers" is not allowed, but "MyUsers" or "UsersSoloDB" are fine.
// Check if a collection exists
bool exists = db.CollectionExists<User>();
bool existsByName = db.CollectionExists("User");
// Drop a collection (deletes all data!)
db.DropCollection<User>();
db.DropCollection("ArchivedUsers");
CRUD Operations Since v1.0.0
Insert
var users = db.GetCollection<User>();
// Single insert - returns the generated ID
var user = new User { Name = "Alice", Email = "alice@example.com" };
long id = users.Insert(user);
// user.Id is also set to the same value
// Batch insert - much faster for multiple items
var newUsers = new List<User>
{
new User { Name = "Bob", Email = "bob@example.com" },
new User { Name = "Charlie", Email = "charlie@example.com" }
};
IList<long> ids = users.InsertBatch(newUsers);
Insert or Replace (Upsert)
When you have unique indexes, you can upsert based on those constraints:
// If a user with this email exists (assuming unique index), replace it
var user = new User { Name = "Alice Updated", Email = "alice@example.com" };
users.InsertOrReplace(user);
// Batch version
users.InsertOrReplaceBatch(manyUsers);
Read
// By ID (throws KeyNotFoundException if not found)
User user = users.GetById(1);
// By ID - returns F# Option
var userOption = users.TryGetById(1);
if (userOption.IsSome()) // Note: In C#, add FSharpOption extensions for .IsSome()
{
// Access the value through the Option
}
// By custom ID type
Document doc = documents.GetById<string>("doc-abc-123");
// All documents as a list
var allUsers = users.ToList();
Update
// Full document update
var user = users.GetById(1);
user.Name = "New Name";
user.Email = "new@email.com";
users.Update(user); // Replaces entire document
// Replace matching documents
users.ReplaceOne(u => u.Email == "old@example.com", newUserData);
users.ReplaceMany(u => u.Status == "pending", templateUser);
Note: Methods ending in One (like ReplaceOne, DeleteOne) affect only one document. If multiple documents match the filter, which one is affected is determined by SQLite's internal ordering and may appear random. Use these methods only when you expect exactly one match, or when you don't care which matching document is affected.
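Because the *One methods pick an arbitrary match, one defensive pattern (a sketch, not a SoloDB requirement) is to assert uniqueness before calling them:

```csharp
// Sketch: guard a *One call when the filter is not backed by a unique index.
var matches = users.Count(u => u.Email == "old@example.com");
if (matches == 1)
{
    users.ReplaceOne(u => u.Email == "old@example.com", newUserData);
}
else if (matches > 1)
{
    // Ambiguous - fail explicitly instead of letting SQLite pick one.
    throw new InvalidOperationException($"Expected one match, found {matches}.");
}
```

When the filtered property carries a unique index, the guard is unnecessary: at most one document can match.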
Partial Updates with UpdateMany
For efficient partial updates without loading the full document. This is significantly faster than loading documents with GetById, modifying them, and calling Update, because it executes a single SQL statement instead of multiple round-trips:
// Set a single field
int count = users.UpdateMany(
u => u.Id <= 10, // Filter
u => u.IsActive.Set(true) // Update action
);
// Set multiple fields at once
users.UpdateMany(
u => u.Status == "pending",
u => u.Status.Set("approved"),
u => u.ApprovedAt.Set(DateTime.UtcNow),
u => u.ApprovedBy.Set("admin")
);
// Append to a collection property
users.UpdateMany(
u => u.Id == userId,
u => u.Tags.Append("verified")
);
Delete
// By ID - returns count of deleted (0 or 1)
int deleted = users.Delete(1);
// By custom ID
documents.Delete<string>("doc-abc-123");
// By predicate
users.DeleteOne(u => u.Email == "old@example.com"); // First match only
users.DeleteMany(u => u.IsActive == false); // All matches
Querying with LINQ Since v1.0.0
SoloDB collections implement IQueryable<T>, giving you full LINQ support. Queries are translated to SQL and executed on SQLite.
Filtering
// Where clause
var activeUsers = users.Where(u => u.IsActive).ToList();
// Multiple conditions
var thirtyDaysAgo = DateTime.UtcNow.AddDays(-30);
var results = users.Where(u =>
u.IsActive &&
u.CreatedAt > thirtyDaysAgo &&
u.Email.Contains("@company.com")
).ToList();
Single Item Queries
// First match (throws if none)
var first = users.First(u => u.Email == "admin@example.com");
// First or default (returns null if none)
var admin = users.FirstOrDefault(u => u.Role == "Admin");
// Single (throws if not exactly one)
var unique = users.Single(u => u.Username == "johndoe");
// Check existence
bool hasAdmins = users.Any(u => u.Role == "Admin");
bool allActive = users.All(u => u.IsActive);
Ordering and Pagination
// Order by
var sorted = users
.OrderBy(u => u.Name)
.ThenByDescending(u => u.CreatedAt)
.ToList();
// Pagination
int page = 2;
int pageSize = 20;
var pageResults = users
.OrderBy(u => u.Id)
.Skip((page - 1) * pageSize)
.Take(pageSize)
.ToList();
Projections
// Select specific properties
var emails = users.Select(u => u.Email).ToList();
// Project to anonymous type
var summaries = users.Select(u => new
{
u.Name,
u.Email,
DaysSinceCreated = (DateTime.UtcNow - u.CreatedAt).Days
}).ToList();
// Project to DTO
var dtos = users.Select(u => new UserDto
{
FullName = u.Name,
ContactEmail = u.Email
}).ToList();
Aggregates
int totalUsers = users.Count();
int activeCount = users.Count(u => u.IsActive);
long total = users.LongCount();
// Note: Min, Max, Sum, Average are supported on numeric projections
var maxId = users.Max(u => u.Id);
String Operations
// Contains, StartsWith, EndsWith
var results = users.Where(u =>
u.Name.Contains("john") ||
u.Email.StartsWith("admin") ||
u.Email.EndsWith("@company.com")
).ToList();
// SQL LIKE pattern (via extension)
var pattern = users.Where(u => u.Name.Like("J%n")).ToList();
Performance note: StartsWith can use indexes for faster lookups (it is translated to >= 'prefix' AND < 'next' range comparisons, which SQLite optimizes efficiently). However, EndsWith and Contains cannot use indexes and require a full table scan.
Array/Collection Queries
// Query nested arrays
var tagged = users.Where(u => u.Tags.Contains("premium")).ToList();
// Check if any element matches
var oneWeekAgo = DateTime.UtcNow.AddDays(-7);
var withRecentOrders = users.Where(u =>
u.Orders.Any(o => o.Date > oneWeekAgo)
).ToList();
Indexing Since v1.0.0
Indexes dramatically improve query performance for filtered and sorted operations. Without an index, SoloDB must scan every document.
Attribute-Based Indexes
The easiest way - add [Indexed] to properties you frequently query:
using SoloDatabase.Attributes;
public class Product
{
public long Id { get; set; } // Always indexed (primary key)
[Indexed(unique: true)] // Unique index - no duplicates allowed
public string SKU { get; set; }
[Indexed] // Non-unique index
public string Category { get; set; }
[Indexed]
public decimal Price { get; set; }
public string Description { get; set; } // Not indexed
}
Indexes are automatically created when the collection is first accessed.
When to Index
- DO index: Properties used in Where clauses, OrderBy, and unique constraints
- DON'T index: Properties rarely queried, or only used in Select projections
- Consider trade-offs: Indexes speed up reads but slow down writes slightly, and increase the database file size on disk/memory
Programmatic Indexes
var products = db.GetCollection<Product>();
// Create a non-unique index
products.EnsureIndex(p => p.Category);
// Create a unique index
products.EnsureUniqueAndIndex(p => p.Email);
// Remove an index
products.DropIndexIfExists(p => p.Category);
// Ensure all attribute-defined indexes exist
products.EnsureAddedAttributeIndexes();
Note: If you add new [Indexed] attributes to your model classes after the database already exists, the indexes won't be created automatically until you call EnsureAddedAttributeIndexes(). Indexes are only auto-created on first collection access.
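A typical place to handle the note above is a one-time startup hook. The sketch below only uses the documented calls (GetCollection, EnsureAddedAttributeIndexes); where it runs in your application is up to you.

```csharp
using SoloDatabase;

// Run once at startup so [Indexed] attributes added to models after the
// database file was created are materialized before serving queries.
using var db = new SoloDB("myapp.db");
var products = db.GetCollection<Product>();  // opens/creates the collection
products.EnsureAddedAttributeIndexes();      // creates any missing attribute-defined indexes
```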
Supported Index Expressions
EnsureIndex and EnsureUniqueAndIndex accept expressions that reference entity properties directly:
// Direct property
products.EnsureIndex(p => p.Category);
// Composite index (tuple)
products.EnsureIndex(p => new { p.Category, p.Price });
// DBRef.Id navigation (indexes the foreign key)
orders.EnsureIndex(o => o.Author.Id);
Unsupported Index Expressions
The following expressions are rejected at call time with a descriptive error message:
| x => x.Id | Entity Id is always indexed automatically |
| x => x.Items.Count | Relation expressions that resolve through link tables cannot be indexed |
| x => x.Tag.Value.Name | DBRef.Value navigation requires a JOIN and cannot be indexed |
| x => x.Name + "suffix" | Expressions containing variables are not allowed |
| x => 42 | Constant expressions must reference the entity parameter |
Unique Constraint Violations
Inserting a duplicate value for a unique index throws SqliteException:
try
{
products.Insert(new Product { SKU = "EXISTING-SKU" });
}
catch (Microsoft.Data.Sqlite.SqliteException ex)
when (ex.Message.Contains("UNIQUE"))
{
Console.WriteLine("SKU already exists!");
}
Relations Since v1.1
SoloDB supports typed references between documents using DBRef<T> for single references and DBRefMany<T> for collections. References are stored as foreign keys in link tables and loaded automatically when querying.
Relation writes participate in the same transaction model as document writes: root scopes use BEGIN IMMEDIATE, nested scopes use SAVEPOINT, and inner rollback does not abort the outer transaction.
Single Reference (DBRef) Since v1.1
Use DBRef<T> to reference one document from another. A single entity can have multiple DBRef properties pointing to the same target type:
public class Person
{
public long Id { get; set; }
public string Name { get; set; }
}
public class Loan
{
public long Id { get; set; }
public decimal Amount { get; set; }
// Unlink only — deleting a loan does not delete the referenced persons
[SoloRef(OnOwnerDelete = DeletePolicy.Unlink)]
public DBRef<Person> Borrower { get; set; }
[SoloRef(OnOwnerDelete = DeletePolicy.Unlink)]
public DBRef<Person> Guarantor { get; set; }
}
var people = db.GetCollection<Person>();
var loans = db.GetCollection<Loan>();
var aliceId = people.Insert(new Person { Name = "Alice" });
var bobId = people.Insert(new Person { Name = "Bob" });
// Reference existing persons by Id
loans.Insert(new Loan
{
Amount = 5000,
Borrower = DBRef<Person>.To(aliceId),
Guarantor = DBRef<Person>.To(bobId)
});
// Or cascade-insert a new person automatically
loans.Insert(new Loan
{
Amount = 3000,
Borrower = DBRef<Person>.To(aliceId),
Guarantor = DBRef<Person>.From(new Person { Name = "Charlie" })
});
// Query loans by borrower — referenced entity is loaded automatically
var aliceLoans = loans.Where(l => l.Borrower.Id == aliceId).ToList();
Console.WriteLine(aliceLoans[0].Borrower.Value.Name); // "Alice"
Console.WriteLine(aliceLoans[0].Guarantor.Value.Name); // "Bob"
// Query through the referenced entity's properties
var charlieGuaranteed = loans.Where(l =>
l.Guarantor.Value.Name == "Charlie"
).ToList();
Console.WriteLine(charlieGuaranteed[0].Amount); // 3000
HasValue vs IsLoaded
HasValue and IsLoaded are independent properties:
- HasValue — true if a reference exists (Id != 0). This is about whether a target is referenced.
- IsLoaded — true if the Value has been hydrated by the query pipeline. This is about whether the target entity data is available in memory.
| DBRef<T>.None | HasValue = false, IsLoaded = false |
| DBRef<T>.To(id) | HasValue = true, IsLoaded = false |
| After query loads entity | HasValue = true, IsLoaded = true |
State Transition Examples
Example 1: Before Loading the Target Entity
// DBRef.To(id) sets HasValue=true but IsLoaded=false
var borrowerId = people.Insert(new Person { Name = "Alice" });
var loan = new Loan { Amount = 1000, Borrower = DBRef<Person>.To(borrowerId) };
Console.WriteLine(loan.Borrower.HasValue); // true — reference exists
Console.WriteLine(loan.Borrower.IsLoaded); // false — entity not loaded yet
// loan.Borrower.Value would throw InvalidOperationException here
Example 2: After Query Hydration
var loaded = loans.First(l => l.Amount == 5000);
Console.WriteLine(loaded.Borrower.HasValue); // true
Console.WriteLine(loaded.Borrower.IsLoaded); // true
Console.WriteLine(loaded.Borrower.Value.Name); // "Alice" — safe to access
DBRef Properties
| Id | The database Id of the referenced entity (0 if empty) |
| HasValue | true if a reference exists (Id != 0) |
| IsLoaded | true if the entity was loaded by the query pipeline |
| Value | The loaded entity (throws InvalidOperationException if not loaded or empty) |
Factory Methods
| DBRef<T>.To(id) | Reference an existing entity by its database Id |
| DBRef<T>.From(entity) | Cascade-insert a new entity, then reference it |
| DBRef<T>.None | Empty reference (no target) |
Collection Reference (DBRefMany) Since v1.1
Use DBRefMany<T> to reference multiple documents. It implements IList<T> with change tracking — SoloDB computes the diff automatically on Update:
public class Tag
{
public long Id { get; set; }
public string Label { get; set; }
}
public class Article
{
public long Id { get; set; }
public string Title { get; set; }
public DBRefMany<Tag> Tags { get; set; } = new();
}
var tags = db.GetCollection<Tag>();
var articles = db.GetCollection<Article>();
var article = new Article { Title = "Breaking News" };
article.Tags.Add(new Tag { Label = "News" }); // Cascade-insert
articles.Insert(article);
// Load and modify
var loaded = articles.First(a => a.Title == "Breaking News");
loaded.Tags.Add(new Tag { Label = "Tech" }); // Add a new tag
loaded.Tags.RemoveAt(0); // Remove first tag
articles.Update(loaded); // Diff is computed automatically
Delete Policies Since v1.1
Control what happens when referenced documents are deleted using the [SoloRef] attribute:
public class Order
{
public long Id { get; set; }
// Block deletion of the referenced Customer if orders reference it
[SoloRef(OnDelete = DeletePolicy.Restrict)]
public DBRef<Customer> Customer { get; set; }
// When this order is deleted, unlink items (items survive)
[SoloRef(OnOwnerDelete = DeletePolicy.Unlink)]
public DBRefMany<OrderItem> Items { get; set; } = new();
}
OnDelete (target entity is deleted)
| Restrict (default) | Block the delete if any references exist |
| Cascade | Also delete the entity holding this reference |
| Unlink | Remove the reference (set to None / remove link row) |
OnOwnerDelete (owner entity is deleted)
| Deletion (default) | Unlink, then delete targets with zero remaining references |
| Unlink | Remove link rows only, targets always survive |
| Restrict | Block owner deletion if any links exist |
Typed-ID References Since v1.1
If your target entity uses a custom ID type (via [SoloId]), use the two-parameter generic form DBRef<T, TId>. The [SoloId] attribute requires an IIdGenerator<T> implementation that generates unique identifiers:
// Helper that derives a unique code from the country name
public static class UniqueCountryTag
{
public static string Get(Country c) => c.Name.Substring(0, 2).ToUpperInvariant();
}
// User-defined generator implementing IIdGenerator<T>
public class CountryCodeGenerator : IIdGenerator<Country>
{
public object GenerateId(ISoloDBCollection<Country> collection, Country document)
=> UniqueCountryTag.Get(document); // Derive code from Name
public bool IsEmpty(object id) => string.IsNullOrEmpty(id as string);
}
public class Country
{
public long Id { get; set; } // Internal row Id required by DBRef/DBRefMany
[SoloId(typeof(CountryCodeGenerator))]
public string Code { get; set; } // Auto-generated from Name (e.g. "UN", "GE", "JA")
public string Name { get; set; }
}
public class Office
{
public long Id { get; set; }
public string City { get; set; }
public DBRef<Country, string> Country { get; set; }
}
To() — Reference by Custom ID (Reuse)
}To() — Reference by Custom ID (Reuse)
.To(id) resolves to an existing record by its [SoloId] value. No new entity is created:
var countries = db.GetCollection<Country>();
var offices = db.GetCollection<Office>();
// Code auto-generated via UniqueCountryTag.Get
countries.Insert(new Country { Name = "United States" }); // Code → "UN"
countries.Insert(new Country { Name = "Germany" }); // Code → "GE"
// Reference existing country by generated code
offices.Insert(new Office
{
City = "New York",
Country = DBRef<Country, string>.To("UN")
});
// Same country, different office — no duplicate created
offices.Insert(new Office
{
City = "San Francisco",
Country = DBRef<Country, string>.To("UN")
});
From() — Cascade Insert (No Reuse)
.From(entity) always cascade-inserts a new entity. If the [SoloId] value already exists, the insert fails with a unique constraint violation:
// This cascade-inserts a new Country — works when "JA" doesn't exist yet
offices.Insert(new Office
{
City = "Tokyo",
Country = DBRef<Country, string>.From(new Country { Name = "Japan" }) // Code → "JA"
});
// This FAILS — "UN" already exists (United States), From() tries to insert a duplicate
// Throws InvalidOperationException (UNIQUE constraint failed)
offices.Insert(new Office
{
City = "Chicago",
Country = DBRef<Country, string>.From(new Country { Name = "United States" })
});
Note: Use .To(id) to reference existing targets. Use .From(entity) only when creating new targets. Typed-ID resolution requires a unique index on the [SoloId] property (provided automatically by the attribute). The resolver matches exactly one target row — if no match is found, the operation is rejected.
Querying Relations Since v1.1
Referenced entities are loaded automatically. You can query through relations using LINQ:
// Filter by referenced entity's Id
var aliceLoans = loans.Where(l => l.Borrower.Id == aliceId).ToList();
// Filter by DBRefMany content
var taggedArticles = articles.Where(a =>
a.Tags.Any(t => t.Label == "News")
).ToList();
Indexing Relations Since v1.1
You can index DBRef.Id to speed up queries that filter by foreign key. This creates an index on the JSON-extracted reference Id stored in the owner document, which is separate from the link table's internal indexes.
Owner-Side Index (DBRef.Id)
When you write a LINQ query filtering on a DBRef Id, SoloDB translates it to a jsonb_extract filter on the owner table:
// LINQ query:
var aliceLoans = loans.Where(l => l.Borrower.Id == aliceId).ToList();
// Translates to SQL:
// SELECT ... FROM Loan WHERE jsonb_extract(Value, '$.Borrower') = @p0
// Without an index: SQLite scans every row, extracting the JSON field per row.
// With an index: SQLite uses the expression index for direct lookup.
// Create the index:
loans.EnsureIndex(l => l.Borrower.Id);
// Composite index — useful for filtering by reference + another field:
loans.EnsureIndex(l => new { l.Borrower.Id, l.Amount });
The effect on the query plan can be verified with SQLite's EXPLAIN QUERY PLAN: before creating the index you get a table-scan plan; after creating it, an index-search plan is available (for deterministic verification you can force the index with INDEXED BY).
// Plan before index (owner table scan):
// EXPLAIN QUERY PLAN SELECT ... FROM Loan WHERE jsonb_extract(Value, '$.Borrower') = @p0;
// -> SCAN Loan
loans.EnsureIndex(l => l.Borrower.Id);
// Plan after index (forced owner expression-index lookup):
// EXPLAIN QUERY PLAN SELECT ... FROM Loan INDEXED BY IX_Loan_Borrower
// WHERE jsonb_extract(Value, '$.Borrower') = @p0;
// -> SEARCH Loan USING INDEX IX_Loan_Borrower
Target-Side Queries
Queries that navigate through DBRef.Value or filter DBRefMany content resolve through JOIN or EXISTS subqueries against the link and target tables. Index the target collection instead:
// This query JOINs through the link table to the Person table:
var borrowerLoans = loans.Where(l => l.Borrower.Value.Name == "Alice").ToList();
// Index the TARGET: people.EnsureIndex(p => p.Name);
// DBRefMany queries use EXISTS subqueries:
var tagged = articles.Where(a => a.Tags.Any(t => t.Label == "News")).ToList();
// Index the TARGET: tags.EnsureIndex(t => t.Label);
What Cannot Be Indexed
SoloDB rejects index expressions that navigate through relations at build time:
| EnsureIndex(l => l.Borrower.Value.Name) | Rejected — navigates through DBRef.Value (resolves via JOIN) |
| EnsureIndex(a => a.Tags.Count) | Rejected — DBRefMany properties resolve through link tables |
Note: Owner-side DBRef.Id indexes are not redundant with the link table's SourceId/TargetId indexes. They serve different query paths: the owner-side index optimizes filtering on the owner collection's JSON field, while link table indexes optimize JOIN traversals.
Schema Evolution Since v1.1
SoloDB detects relation schema changes when you call GetCollection<T>() and applies safe transitions automatically. Unsupported transitions raise an error with a descriptive message and a fix instruction. All detection and migration runs at collection-open time, not at individual entity access.
Supported Transitions
| Transition | Outcome | Details |
| Add DBRef or DBRefMany property | Allowed | Link table auto-created via CREATE TABLE IF NOT EXISTS. Catalog row inserted. |
| Remove relation (no persisted links) | Allowed | Catalog row orphaned, link table empty or absent. No error. |
| Single → Many (widen) | Allowed | Atomic table-recreate within the current transaction. The link table is rebuilt with UNIQUE(SourceId, TargetId) replacing UNIQUE(SourceId). Row count is verified before and after. |
| Add typed-id (DBRef<T> → DBRef<T,TId>) | Allowed | No link table change. Requires a unique index on the target's [SoloId] property — raises an error if missing. |
| Remove typed-id (DBRef<T,TId> → DBRef<T>) | Allowed | Typed-id resolution disabled. No link table change. |
| Change OnDelete policy | Allowed | Catalog metadata updated via upsert. No link table DDL change. New policy enforced at application layer on next deletion. |
| Change OnOwnerDelete policy | Allowed | Same catalog metadata update. Exception: DBRefMany with OnOwnerDelete = Cascade is rejected at build time. |
Rejected Transitions
| Transition | Error |
| Remove relation (persisted links exist) | "relation property was removed but persisted link data exists" |
| Many → Single (narrow) | "relation kind narrowing not supported" |
| Change target type | "relation target type changed" |
| Single → Many (duplicate source-target pairs) | "forward migration failed ... duplicate (SourceId, TargetId) pairs" |
Important Notes
- Renaming a relation property is equivalent to removing the old property and adding a new one. If the old property has persisted link data, the removal is rejected.
- SoloRef.Unique changes on an existing link table are not applied retroactively — CREATE TABLE IF NOT EXISTS does not alter existing table constraints. The unique flag takes effect only when the link table is first created.
- Policy changes update the catalog metadata only. They do not modify link table DDL. Policies are enforced at the application layer by the relation engine, not by SQLite constraints.
// Adding a new relation property — link table created automatically
public class Order
{
public long Id { get; set; }
public DBRef<Customer> Customer { get; set; }
public DBRef<Person> SalesRep { get; set; } // New property — safe to add
}
var orders = db.GetCollection<Order>(); // Link table for SalesRep created here
// Changing a delete policy — takes effect immediately
public class Order
{
public long Id { get; set; }
[SoloRef(OnDelete = DeletePolicy.Cascade)] // Changed from default Restrict
public DBRef<Customer> Customer { get; set; }
}
var orders = db.GetCollection<Order>(); // Catalog updated, new policy active
Transactions Since v1.0.0
For operations that must succeed or fail together, use transactions. If any exception occurs, all changes are automatically rolled back.
Root transaction scopes use SQLite BEGIN IMMEDIATE. Calling WithTransaction inside an active transaction uses a nested SQLite SAVEPOINT.
Basic Transaction
db.WithTransaction(tx =>
{
var accounts = tx.GetCollection<Account>();
var from = accounts.GetById(fromAccountId);
var to = accounts.GetById(toAccountId);
if (from.Balance < amount)
throw new InvalidOperationException("Insufficient funds");
from.Balance -= amount;
to.Balance += amount;
accounts.Update(from);
accounts.Update(to);
});
// If we get here, transaction committed successfully
Transaction with Return Value
var orderId = db.WithTransaction(tx =>
{
var orders = tx.GetCollection<Order>();
var inventory = tx.GetCollection<InventoryItem>();
// Create order and update inventory atomically
var order = new Order { CustomerId = customerId, Total = total };
orders.Insert(order);
foreach (var item in orderItems)
{
var inv = inventory.Single(i => i.ProductId == item.ProductId);
inv.Quantity -= item.Quantity;
inventory.Update(inv);
}
return order.Id;
});
Automatic Rollback
try
{
db.WithTransaction(tx =>
{
var users = tx.GetCollection<User>();
users.Insert(new User { Name = "Test" });
// This exception causes automatic rollback
throw new Exception("Something went wrong!");
});
}
catch (Exception)
{
// The user was NOT inserted - transaction rolled back
}
Async Transactions
var result = await db.WithTransactionAsync(async tx =>
{
var users = tx.GetCollection<User>();
users.Insert(new User { Name = "Alice" });
return users.Count();
});
Nested Transactions Since v1.1
Calling WithTransaction inside an existing transaction creates a nested SQLite SAVEPOINT. If the inner operation fails, only the inner changes are rolled back - the outer transaction continues:
db.WithTransaction(tx =>
{
var users = tx.GetCollection<User>();
users.Insert(new User { Name = "Alice" });
try
{
tx.WithTransaction(inner =>
{
var orders = inner.GetCollection<Order>();
orders.Insert(new Order { Total = -1 });
throw new Exception("Invalid order");
});
}
catch (Exception)
{
// Only the order insert was rolled back
// Alice's insert is still part of the outer transaction
}
users.Insert(new User { Name = "Bob" });
// Both Alice and Bob are committed when the outer transaction completes
});
Transactions in Event Handlers
Event handlers execute during active SQL statements. Calling WithTransaction on the event context's ISoloDB throws NotSupportedException at runtime because SQLite cannot open savepoints while a statement is in progress. Use the event context directly instead:
// This compiles but throws NotSupportedException at runtime:
db.Events.OnInserted<Order>((ctx, order) =>
{
ctx.Db.WithTransaction(tx => { ... }); // Throws NotSupportedException
return SoloDBEventsResult.EventHandled;
});
// Instead, use the event context directly:
db.Events.OnInserted<Order>((ctx, order) =>
{
var audit = ctx.Db.GetCollection<AuditLog>();
audit.Insert(new AuditLog { Action = "OrderCreated" });
return SoloDBEventsResult.EventHandled;
});
FileSystem in Transactions
FileSystem behavior depends on which context you access it from:
- db.FileSystem: single-statement operations (Delete, SetMetadata, DeleteMetadata, SetFileModificationDate, SetFileCreationDate) are atomic via SQLite autocommit.
- db.FileSystem: multi-step structural operations (MoveFile, MoveReplaceFile, MoveDirectory, CopyFile, CopyDirectory, WriteAt, UploadBulk) wrap their own internal transaction and are fully atomic.
- db.FileSystem: Upload commits chunk-by-chunk to avoid holding a long write lock during large uploads. A failure mid-upload can leave partial data.
- tx.FileSystem: all operations participate in the enclosing transaction. A rollback undoes uploaded chunks, metadata changes, and structural operations atomically.
FileStorage coordination is SQLite-transaction-native in this release; no process-local path mutex layer is used.
// Transactional upload: full rollback on failure
db.WithTransaction(tx =>
{
var users = tx.GetCollection<User>();
users.Insert(new User { Name = "Alice" });
tx.FileSystem.Upload("/avatars/alice.png", imageStream);
// If anything throws here, both user insert and file upload roll back.
});
// Root upload: each chunk commits independently
// (chosen to avoid long write locks during large uploads)
db.FileSystem.Upload("/public/logo.png", stream);
Tip: Use tx.FileSystem when you need atomic file writes with your data transaction. Use db.FileSystem for standalone large uploads where partial failure is acceptable.
Events API Since v1.0.0
SoloDB provides an event system that lets you react to document changes. Register handlers to run custom logic before or after insert, update, and delete operations.
Event Types
Six event types are available, split into "before" and "after" events:
| Before Events | Run before the operation commits. Can cause rollback. |
OnInserting | Before a document is inserted |
OnUpdating | Before a document is updated |
OnDeleting | Before a document is deleted |
| After Events | Run after the operation, but still in the same transaction. |
OnInserted | After a document is inserted |
OnUpdated | After a document is updated |
OnDeleted | After a document is deleted |
Registering Handlers
Collections implement the events interface directly:
var users = db.GetCollection<User>();
// Before-insert handler
users.OnInserting(ctx =>
{
Console.WriteLine($"Inserting user: {ctx.Item.Name}");
return SoloDBEventsResult.EventHandled;
});
// After-insert handler
users.OnInserted(ctx =>
{
Console.WriteLine($"Inserted user with ID: {ctx.Item.Id}");
return SoloDBEventsResult.EventHandled;
});
// Update handler with access to old and new values
users.OnUpdating(ctx =>
{
Console.WriteLine($"Updating from {ctx.OldItem.Name} to {ctx.Item.Name}");
return SoloDBEventsResult.EventHandled;
});
Handler Return Values
Handlers must return a SoloDBEventsResult value:
EventHandled | Handler completed successfully. Continue with operation. |
RemoveHandler | Handler completed and should be unregistered (one-time handler). |
// One-time handler that removes itself after first execution
users.OnInserting(ctx =>
{
Console.WriteLine("This runs only once!");
return SoloDBEventsResult.RemoveHandler;
});
Handler Context
The context object provides access to the item and a scoped database connection:
users.OnInserting(ctx =>
{
// Access the item being inserted/updated/deleted
var user = ctx.Item;
// Access the collection name
string collectionName = ctx.CollectionName;
// The context itself implements ISoloDB - use it for related operations
var logs = ctx.GetCollection<AuditLog>();
logs.Insert(new AuditLog { Action = "UserCreated", UserId = user.Id });
return SoloDBEventsResult.EventHandled;
});
For update events, the context provides both old and new values:
users.OnUpdating(ctx =>
{
var oldUser = ctx.OldItem; // Value before update
var newUser = ctx.Item; // Value after update
if (oldUser.Email != newUser.Email)
{
Console.WriteLine($"Email changed from {oldUser.Email} to {newUser.Email}");
}
return SoloDBEventsResult.EventHandled;
});
Important: Always use the ctx parameter for database operations inside handlers. Using an external SoloDB instance will cause a database lock error.
Unregistering Handlers
To remove a handler, keep a reference and call Unregister:
// Keep a reference to the handler
InsertingHandler<User> myHandler = ctx =>
{
Console.WriteLine("Handler running");
return SoloDBEventsResult.EventHandled;
};
// Register it
users.OnInserting(myHandler);
// Later, unregister it
users.Unregister(myHandler);
Important: Do not call Unregister from within a handler. This will cause an error. To self-remove a handler, return SoloDBEventsResult.RemoveHandler instead.
Exception Handling and Rollback
All event handlers run inside the same SQLite transaction as the operation. If any handler throws an exception, the entire operation rolls back:
users.OnInserting(ctx =>
{
if (ctx.Item.Email == null)
throw new InvalidOperationException("Email is required");
return SoloDBEventsResult.EventHandled;
});
try
{
users.Insert(new User { Name = "Test", Email = null });
}
catch (SqliteException ex)
{
// Insert was rolled back - user was NOT inserted
Console.WriteLine(ex.Message); // Contains "Email is required"
}
Important: After-event handlers (OnInserted, OnUpdated, OnDeleted) also run inside the transaction. If they throw, the operation rolls back even though the database change already happened. This is SQLite trigger behavior and ensures consistency.
Multiple Handlers
You can register multiple handlers for the same event. They execute in registration order. If one throws, subsequent handlers do not run:
users.OnInserting(ctx =>
{
Console.WriteLine("First handler");
return SoloDBEventsResult.EventHandled;
});
users.OnInserting(ctx =>
{
Console.WriteLine("Second handler");
throw new Exception("Oops!"); // Stops here, third won't run
});
users.OnInserting(ctx =>
{
Console.WriteLine("Third handler"); // Never reached
return SoloDBEventsResult.EventHandled;
});
Use Cases
- Audit logging - Record who changed what and when
- Validation - Enforce business rules before changes commit
- Cascading updates - Update related documents automatically
- Cache invalidation - Clear caches when data changes
- Notifications - Trigger external notifications on changes
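As a concrete sketch of the audit-logging case (AuditLog is a hypothetical example type, not part of SoloDB):

```csharp
public class AuditLog
{
    public long Id { get; set; }
    public string Action { get; set; }
    public long UserId { get; set; }
    public DateTime Timestamp { get; set; }
}

var users = db.GetCollection<User>();

// Record every user creation. Using ctx (not an outer SoloDB instance)
// keeps the audit insert in the same transaction as the user insert,
// so a rollback discards both.
users.OnInserted(ctx =>
{
    var audit = ctx.GetCollection<AuditLog>();
    audit.Insert(new AuditLog
    {
        Action = "UserCreated",
        UserId = ctx.Item.Id,
        Timestamp = DateTime.UtcNow
    });
    return SoloDBEventsResult.EventHandled;
});
```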
Polymorphic Collections Since v1.0.0
Store different derived types in a single collection and query them by base type or filter by concrete type.
Abstract Base Class
public abstract class Shape
{
public long Id { get; set; }
public string Color { get; set; }
public abstract double CalculateArea();
}
public class Circle : Shape
{
public double Radius { get; set; }
public override double CalculateArea() => Math.PI * Radius * Radius;
}
public class Rectangle : Shape
{
public double Width { get; set; }
public double Height { get; set; }
public override double CalculateArea() => Width * Height;
}
Usage
var shapes = db.GetCollection<Shape>();
// Insert different types
shapes.Insert(new Circle { Color = "Red", Radius = 5.0 });
shapes.Insert(new Rectangle { Color = "Blue", Width = 4.0, Height = 6.0 });
// Query all shapes (returns properly typed objects)
var allShapes = shapes.ToList();
// allShapes[0] is Circle, allShapes[1] is Rectangle
// Query by base class properties
var blueShapes = shapes.Where(s => s.Color == "Blue").ToList();
// Filter by concrete type using OfType<T>()
var circles = shapes.OfType<Circle>().ToList();
var largeCircles = shapes.OfType<Circle>()
.Where(c => c.Radius > 3.0)
.ToList();
How It Works
SoloDB stores type information in a special $type field in the JSON when the collection is based on an abstract class or interface. This allows correct deserialization back to the original type.
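Conceptually, the stored documents for the shapes above carry the discriminator alongside the data. The sketch below is illustrative only: the real documents are binary JSONB, and the exact discriminator string is an implementation detail.

```csharp
// Illustrative JSON shape of the stored documents (not the actual binary encoding):
//   { "$type": "Circle",    "Color": "Red",  "Radius": 5.0 }
//   { "$type": "Rectangle", "Color": "Blue", "Width": 4.0, "Height": 6.0 }

// OfType<Circle>() can therefore filter on the discriminator inside the query,
// returning only documents that deserialize to Circle:
var circles = shapes.OfType<Circle>().ToList();
```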
Direct SQL Access Since v1.0.0
For complex queries or operations not covered by LINQ, access SQLite directly. SoloDB provides a Dapper-like micro-ORM API with high-performance object mapping using compiled expression trees.
Borrowing a Connection
// Borrow a connection from the pool
using var conn = db.Connection.Borrow();
Dapper-Like Query API
The borrowed connection provides familiar methods similar to Dapper, with automatic parameter binding and result mapping. Add the following using statement to access these extension methods:
using SoloDatabase.SQLiteTools;
// Execute non-query commands (CREATE, INSERT, UPDATE, DELETE)
// Returns number of rows affected
conn.Execute("CREATE TABLE IF NOT EXISTS Logs (Id INTEGER PRIMARY KEY, Message TEXT)");
conn.Execute("INSERT INTO Logs (Message) VALUES (@msg)", new { msg = "Hello" });
// Query multiple rows - returns IEnumerable<T>
var logs = conn.Query<LogEntry>("SELECT * FROM Logs WHERE Id > @id", new { id = 100 });
// Query first row (throws if no results)
var count = conn.QueryFirst<int>("SELECT COUNT(*) FROM Logs");
// Query first row or default (returns null/default if no results)
var log = conn.QueryFirstOrDefault<LogEntry>("SELECT * FROM Logs WHERE Id = @id", new { id = 999 });
Object Mapping
The query methods automatically map SQL results to your types. For complex types, SoloDB builds and compiles LINQ expression trees on first use, creating optimized mappers that match column names to property/field names:
// Map to a class
public class LogEntry
{
public long Id { get; set; }
public string Message { get; set; }
}
var logs = conn.Query<LogEntry>("SELECT Id, Message FROM Logs");
// Map to anonymous types
var results = conn.Query<dynamic>("SELECT Id, Message FROM Logs");
// Map to primitives
var ids = conn.Query<long>("SELECT Id FROM Logs");
Accessing Collection Data
// Documents are stored as JSONB in the 'Value' column
// Use SQLite's json_extract to query specific fields
var rawUsers = conn.Query<dynamic>(
"SELECT Id, json_extract(Value, '$.Name') as Name FROM User WHERE json_extract(Value, '$.IsActive') = 1"
);
File Storage Since v1.0.0
SoloDB includes a complete hierarchical file storage system stored directly in the database. Files are split into 16KB chunks, compressed using Snappy, and stored in SQLite. This provides:
- Partial reads - Read only what you need without loading the entire file
- Sparse file support - Write at any offset; unwritten areas don't consume space
- Automatic compression - Snappy compression reduces storage size
- Transactional safety - File operations participate in database transactions
- Metadata support - Attach key-value metadata to files and directories
Accessing the FileSystem
var fs = db.FileSystem;
Upload and Download
// Upload from a stream
using (var stream = File.OpenRead("report.pdf"))
{
fs.Upload("/documents/reports/2024-q4.pdf", stream);
}
// Download to a stream
using (var output = File.Create("downloaded.pdf"))
{
fs.Download("/documents/reports/2024-q4.pdf", output);
}
// Check existence and delete
bool exists = fs.Exists("/documents/reports/2024-q4.pdf");
fs.DeleteFileAt("/documents/reports/2024-q4.pdf");
Stream-Based Access (Like File.Open)
The OpenOrCreateAt method returns a standard Stream that works just like File.Open(). You can use it with any .NET stream API:
// Compare: System.IO file access
using (var fileStream = File.Open("local.txt", FileMode.OpenOrCreate))
{
fileStream.Write(data, 0, data.Length);
fileStream.Position = 0;
fileStream.Read(buffer, 0, buffer.Length);
}
// SoloDB file access - same API!
using (var fileStream = fs.OpenOrCreateAt("/data/log.txt"))
{
fileStream.Write(data, 0, data.Length);
fileStream.Position = 0;
fileStream.Read(buffer, 0, buffer.Length);
}
Works with StreamReader/StreamWriter too:
// Write text
using (var stream = fs.OpenOrCreateAt("/logs/app.log"))
using (var writer = new StreamWriter(stream))
{
writer.WriteLine($"[{DateTime.UtcNow}] Application started");
writer.WriteLine($"[{DateTime.UtcNow}] Processing...");
}
// Read text
using (var stream = fs.OpenOrCreateAt("/logs/app.log"))
using (var reader = new StreamReader(stream))
{
string contents = reader.ReadToEnd();
}
Random Access (Partial Reads/Writes)
Unlike document storage, FileSystem supports efficient partial access:
// Write at specific offset (creates sparse file if needed)
byte[] data = GetSomeData();
fs.WriteAt("/data/sparse.bin", 1024 * 1024, data); // Write at 1MB offset
// Read from specific offset - doesn't load entire file
byte[] chunk = fs.ReadAt("/data/sparse.bin", 1024 * 1024, data.Length);
// Sparse files: unwritten areas read as zeros, don't consume storage
fs.WriteAt("/sparse.dat", 10_000_000, new byte[] { 1, 2, 3 }); // 10MB offset
// File is NOT 10MB on disk - only the written chunks are stored
File and Directory Metadata
// Set file metadata (key-value pairs)
fs.SetMetadata("/documents/report.pdf", "Author", "Finance Team");
fs.SetMetadata("/documents/report.pdf", "Department", "Accounting");
// Read file info with metadata
var fileInfo = fs.GetAt("/documents/report.pdf");
Console.WriteLine($"Name: {fileInfo.Name}");
Console.WriteLine($"Size: {fileInfo.Length} bytes");
Console.WriteLine($"Created: {fileInfo.Created}");
Console.WriteLine($"Modified: {fileInfo.Modified}");
Console.WriteLine($"Author: {fileInfo.Metadata["Author"]}");
// Delete specific metadata
fs.DeleteMetadata(fileInfo, "Department");
// Directory metadata works the same way
var dir = fs.GetOrCreateDirAt("/documents/archive");
fs.SetDirectoryMetadata(dir, "RetentionPolicy", "7years");
fs.DeleteDirectoryMetadata(dir, "RetentionPolicy");
Directory Operations
// Create directory (creates parent directories automatically)
var dir = fs.GetOrCreateDirAt("/documents/archive/2024");
// Get directory info
var dirInfo = fs.GetDirAt("/documents/archive");
// List files in a directory
var files = fs.ListFilesAt("/documents/reports/");
// List subdirectories
var dirs = fs.ListDirectoriesAt("/documents/");
// Recursive listing (files and directories)
var allEntries = fs.RecursiveListEntriesAt("/documents/");
// Lazy recursive listing (memory efficient for large trees)
var lazyEntries = fs.RecursiveListEntriesAtLazy("/");
// Delete directory (must be empty)
fs.DeleteDirAt("/documents/old");
Move and Rename Files
// Move/rename a file (throws IOException if destination exists)
fs.MoveFile("/documents/draft.pdf", "/documents/final.pdf");
// Move to different directory
fs.MoveFile("/inbox/file.txt", "/archive/2024/file.txt");
// Move and replace if exists
fs.MoveReplaceFile("/temp/new.pdf", "/documents/report.pdf");
Bulk Upload
For uploading many files efficiently in a single transaction:
var files = new List<BulkFileData>
{
new("/logs/app1.log", Encoding.UTF8.GetBytes("Log data 1"), null, null),
new("/logs/app2.log", Encoding.UTF8.GetBytes("Log data 2"), null, null),
new("/images/logo.png", imageBytes, DateTimeOffset.UtcNow, null)
};
fs.UploadBulk(files); // Single transaction for all files
File Timestamps
Files and directories track Created and Modified timestamps. The Modified timestamp is automatically updated whenever you write to a file or upload new content:
// Modified is automatically updated on writes
fs.WriteAt("/data/file.bin", 0, data); // Modified = now
fs.Upload("/data/file.bin", stream); // Modified = now
// Manually set timestamps when needed
fs.SetFileCreationDate("/archive/old.txt", DateTimeOffset.UtcNow.AddYears(-1));
fs.SetFileModificationDate("/archive/old.txt", DateTimeOffset.UtcNow.AddDays(-30));
// Read timestamps from file info
var info = fs.GetAt("/archive/old.txt");
Console.WriteLine($"Created: {info.Created}");
Console.WriteLine($"Modified: {info.Modified}");
Configuration Since v1.0.0
Database Location
// File-based (persistent)
using var db = new SoloDB("path/to/database.db");
using var db = new SoloDB("./relative/path.db");
using var db = new SoloDB(@"C:\absolute\path.db");
// In-memory (lost when disposed)
using var db = new SoloDB("memory:my-database");
// Shared in-memory (accessible by name within process)
using var db1 = new SoloDB("memory:shared");
using var db2 = new SoloDB("memory:shared"); // Same database
Long-Running Applications
// Singleton pattern for web apps / services
public static class Database
{
public static SoloDB Instance { get; } = new SoloDB("app.db");
}
// Usage
var users = Database.Instance.GetCollection<User>();
Database Maintenance
// Optimize query plans (runs ANALYZE)
db.Optimize();
// Backup to another database
using var backup = new SoloDB("backup.db");
db.BackupTo(backup);
// Vacuum into new file (compacts and defragments)
db.VacuumTo("compacted.db");
Note: BackupTo and VacuumTo must use the same storage medium - you cannot backup from a file-based database to an in-memory database or vice versa.
Query Caching
SoloDB caches prepared SQL statements for performance. The internal SoloDBConfiguration type contains a CachingEnabled flag that controls this behavior. You can manage caching through these methods:
// Disable caching (reduces memory, slower repeated queries)
// Sets config.CachingEnabled = false
db.DisableCaching();
// Re-enable caching
// Sets config.CachingEnabled = true
db.EnableCaching();
// Clear the current cache (frees memory, keeps caching enabled)
db.ClearCache();
Caching is enabled by default. Disabling it automatically clears any cached commands. This can be useful for memory-constrained environments or when running many unique one-off queries.
Extending LINQ Support
SoloDB's LINQ-to-SQL translator can be extended to handle custom expressions. Two handler lists are available:
using SoloDatabase.QueryTranslator;
// Pre-expression handler: Intercept expressions BEFORE the default translator
// Return true to indicate you've handled the expression
QueryTranslator.preExpressionHandler.Add((queryBuilder, expression) =>
{
// Custom handling for specific expression types
// Return false to let the default translator handle it
return false;
});
// Unknown expression handler: Handle expressions the default translator doesn't support
// Called when no built-in handler matches
QueryTranslator.unknownExpressionHandler.Add((queryBuilder, expression) =>
{
// Handle custom expression types that aren't supported by default
// Throw or return false if you can't handle it
return false;
});
Advanced: The preExpressionHandler list runs first for every expression, allowing you to override default behavior. The unknownExpressionHandler list is called only when the built-in translator encounters an unrecognized expression type, providing a fallback mechanism.
Performance Tips Since v1.0.0
1. Use Indexes on Queried Properties
// Without index: Full table scan
var user = users.FirstOrDefault(u => u.Email == "test@example.com");
// With index: Fast lookup
[Indexed]
public string Email { get; set; }
2. Use Batch Operations
// Slow: 1000 individual transactions
foreach (var item in items)
collection.Insert(item);
// Fast: Single transaction
collection.InsertBatch(items);
3. Use Transactions for Multiple Operations
// Slow: Each update is a separate transaction
foreach (var user in usersToUpdate)
{
user.LastSeen = DateTime.UtcNow;
users.Update(user);
}
// Fast: Single transaction
db.WithTransaction(tx =>
{
var col = tx.GetCollection<User>();
foreach (var user in usersToUpdate)
{
user.LastSeen = DateTime.UtcNow;
col.Update(user);
}
});
4. Use Projections for Large Documents
// Slow: Loads entire documents
var names = users.ToList().Select(u => u.Name);
// Fast: Only fetches Name field
var names = users.Select(u => u.Name).ToList();
5. Use UpdateMany for Partial Updates
// Slow: Load, modify, save each document
foreach (var user in users.Where(u => u.NeedsUpdate))
{
var u = users.GetById(user.Id);
u.Status = "updated";
users.Update(u);
}
// Fast: Single SQL UPDATE statement
users.UpdateMany(u => u.NeedsUpdate, u => u.Status.Set("updated"));
6. Keep Documents Small
SQLite reads the entire JSONB document when accessing any field. Large documents slow down all operations, even simple queries. For large binary data, use the built-in FileSystem API which supports partial reads. Use the Events API to keep files and documents in sync automatically:
// Bad: Storing large data in documents
public class Report
{
public long Id { get; set; }
public string Title { get; set; }
public byte[] PdfContent { get; set; } // Large! Loaded on every access
}
// Good: Store large data in FileSystem, reference by path
public class Report
{
public long Id { get; set; }
public string Title { get; set; }
public string PdfPath { get; set; } // e.g., "/reports/2024/report-123.pdf"
}
// Use Events to sync FileSystem with document lifecycle
db.Events.OnDeleted<Report>((ctx, report) =>
{
ctx.FileSystem.DeleteFileAt(report.PdfPath);
});
// Read only what you need from FileSystem
byte[] chunk = fs.ReadAt(report.PdfPath, offset: 0, length: 1024);
Benchmark Results vs LiteDB
SoloDB shows strong performance in common operations:
| Insert 10,000 documents | 29% faster than LiteDB |
| Complex LINQ queries | 95% faster than LiteDB |
| GroupBy operations | 57% faster than LiteDB |
| Memory usage | Up to 99% less allocation |
Source: SoloDB vs LiteDB Benchmark
API Reference Since v1.0.0
SoloDB Class
new SoloDB(string path) | Create/open database at path |
GetCollection<T>() | Get typed collection (name from type) |
GetCollection<T>(string name) | Get typed collection with custom name |
GetUntypedCollection(string name) | Get untyped collection for dynamic use |
CollectionExists<T>() | Check if collection exists |
DropCollection<T>() | Delete collection and all data |
WithTransaction(Action) | Execute in transaction (BEGIN IMMEDIATE) |
WithTransaction<T>(Func) | Execute in transaction with return |
WithTransactionAsync(Func) | Execute async transaction |
WithTransactionAsync<T>(Func) | Execute async transaction with return |
FileSystem | Access file storage API |
Connection | Access connection pool for raw SQL |
Optimize() | Run SQLite ANALYZE |
BackupTo(SoloDB target) | Backup to another database |
VacuumTo(string path) | Compact into new file |
Dispose() | Close database connection |
ISoloDBCollection<T> Interface
Insert(T item) | Insert document, returns ID |
InsertBatch(IEnumerable<T>) | Batch insert, returns IDs |
InsertOrReplace(T item) | Upsert based on unique index |
GetById(long id) | Get by ID (throws if not found) |
GetById<TId>(TId id) | Get by custom ID type |
TryGetById(long id) | Get by ID (returns null/None) |
Update(T item) | Replace entire document |
UpdateMany(filter, transforms) | Partial update matching docs |
ReplaceOne(filter, item) | Replace first match |
ReplaceMany(filter, item) | Replace all matches |
Delete(long id) | Delete by ID |
DeleteOne(filter) | Delete first match |
DeleteMany(filter) | Delete all matches |
EnsureIndex(expression) | Create non-unique index |
EnsureUniqueAndIndex(expression) | Create unique index |
DropIndexIfExists(expression) | Remove index |
EnsureAddedAttributeIndexes() | Create indexes from [Indexed] attributes |
DBRef<T> Struct
DBRef<T>.To(long id) | Create reference to existing entity by Id |
DBRef<T>.From(T entity) | Create reference for cascade-insert |
DBRef<T>.None | Empty reference (no target) |
.Id | Database Id of referenced entity |
.HasValue | true if reference exists |
.Value | Loaded entity (throws if not loaded) |
.IsLoaded | true if entity was loaded by query |
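A minimal usage sketch based on the table above, reusing the Order/Customer types from the relations examples (assumed shape; exact cascade-insert behavior follows the relation configuration):

```csharp
// Reference an existing customer by its database Id
var order = new Order
{
    Customer = DBRef<Customer>.To(existingCustomerId)
};

// Or wrap a new entity so it is cascade-inserted together with the order
var order2 = new Order
{
    Customer = DBRef<Customer>.From(new Customer { Name = "Acme Corp" })
};

// An order with no customer target yet
var order3 = new Order { Customer = DBRef<Customer>.None };

if (order.Customer.HasValue)
{
    long customerId = order.Customer.Id; // Id is always available on a set reference
    // order.Customer.Value is safe only when IsLoaded is true,
    // i.e. the query actually loaded the referenced entity.
}
```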
DBRefMany<T> Class
| Implements IList<T> | Add, Remove, RemoveAt, Clear, Insert, indexer |
.Count | Number of items in collection |
.IsLoaded | true if populated from query |
DeletePolicy Enum
Restrict | Block the operation if references exist (default for OnDelete) |
Cascade | Cascade delete to referencing entities (OnDelete only) |
Unlink | Remove references, entities survive |
Deletion | Unlink then delete zero-reference targets (default for OnOwnerDelete) |
ISoloDBCollectionEvents<T> Interface
OnInserting(handler) | Register before-insert handler |
OnInserted(handler) | Register after-insert handler |
OnUpdating(handler) | Register before-update handler |
OnUpdated(handler) | Register after-update handler |
OnDeleting(handler) | Register before-delete handler |
OnDeleted(handler) | Register after-delete handler |
Unregister(handler) | Remove a registered handler |
SoloDBEventsResult Enum
EventHandled | Handler completed, keep handler registered |
RemoveHandler | Handler completed, unregister after this call |
Attributes
[Indexed] | Create non-unique index on property |
[Indexed(unique: true)] | Create unique index on property |
[SoloId(typeof(Generator))] | Mark as ID with custom generator |
[Polymorphic] | Mark a class for polymorphic serialization (stores $type discriminator) |
[SoloRef(OnDelete, OnOwnerDelete)] | Configure relation delete policies on DBRef/DBRefMany properties |
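A short sketch combining the indexing attributes from the table (Product is a hypothetical example type):

```csharp
public class Product
{
    public long Id { get; set; }

    [Indexed] // non-unique index: fast lookups by category
    public string Category { get; set; }

    [Indexed(unique: true)] // unique index: duplicate SKUs are rejected
    public string Sku { get; set; }
}

// Attribute-declared indexes can be created explicitly if needed:
var products = db.GetCollection<Product>();
products.EnsureAddedAttributeIndexes();
```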