S3-backed Git hosting — use your storage or ours
GitForge stores every Git object and LFS file in S3-compatible storage. Use our managed infrastructure, or connect your own bucket for full data custody. Either way, you get the same Git workflows, file locking, RBAC, and API.
How S3-backed Git hosting works
Stateless server layer
The API server holds no local state. Git objects, LFS files, and pack caches all live in S3-compatible object storage. Scale horizontally without shared filesystems.
Three-tier caching
A Redis L1 cache for hot objects, a golden pack cache for full-clone acceleration, and S3 for durable storage. Reads are fast; storage is durable.
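The read path described above can be sketched roughly as follows. This is an illustrative simplification, not GitForge's actual code: dicts stand in for Redis, the golden pack cache, and S3, and the lookup order is an assumption based on the tier descriptions.

```python
# Hypothetical three-tier read path: L1 cache, pack cache, durable S3.
redis_l1 = {}                               # hot objects, fastest tier
golden_packs = {}                           # pre-built packs for full clones
s3_bucket = {"obj:abc123": b"blob data"}    # durable object storage

def read_object(key):
    """Check caches fastest-first; fall through to durable S3 storage."""
    if key in redis_l1:
        return redis_l1[key]
    if key in golden_packs:
        return golden_packs[key]
    data = s3_bucket.get(key)
    if data is not None:
        redis_l1[key] = data                # promote to L1 after a miss
    return data
```

On a cache miss the object is fetched from S3 and promoted into the L1 cache, so repeated reads of the same object never touch the storage backend.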
Encryption at rest
Objects are stored with server-side encryption. On BYO storage, use your own KMS keys — GitForge never holds your encryption keys.
RBAC and audit logging
Path-level access control with glob patterns, role bindings, and full audit event logging — regardless of which storage backend you use.
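A minimal sketch of path-level access control with glob patterns, as described above. The rule format and role names here are illustrative assumptions, not GitForge's actual policy syntax:

```python
# Hypothetical role bindings: each role maps to (action, glob) rules.
from fnmatch import fnmatch

ROLE_BINDINGS = {
    "ci-bot":    [("read", "**")],
    "designers": [("read", "**"), ("write", "assets/**")],
}

def is_allowed(role, action, path):
    """Allow if any binding for the role matches both action and path."""
    for allowed_action, pattern in ROLE_BINDINGS.get(role, []):
        if action == allowed_action and fnmatch(path, pattern):
            return True
    return False
```

Because checks run in the server layer rather than the storage layer, the same policy applies whether objects live in managed storage or your own bucket.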
Supported storage backends
When to bring your own storage
Data residency requirements
Keep Git objects and LFS files in a specific region or jurisdiction. Your bucket, your compliance boundary.
Air-gapped or on-premise deployments
Run MinIO on your own infrastructure. GitForge connects to it the same way it connects to cloud S3.
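As a sketch of what such a deployment might look like, the server only needs an S3-compatible endpoint and credentials. The key names below are illustrative assumptions, not GitForge's documented configuration format:

```yaml
# Hypothetical storage config pointing at a self-hosted MinIO instance.
storage:
  endpoint: "http://minio.internal:9000"   # any S3-compatible endpoint
  bucket: "gitforge-objects"
  access_key: "GITFORGE_APP"
  secret_key: "${STORAGE_SECRET}"          # injected from a secret store
  region: "us-east-1"                      # MinIO accepts any region string
```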
Cost optimization at scale
Negotiate your own cloud storage rates and use reserved capacity. No middleman markup on storage.
Security and audit requirements
Your encryption keys, your access logs, your bucket policies. GitForge handles Git — you handle storage security.
Frequently Asked Questions
Do I need BYO storage to use GitForge?
No. GitForge managed storage works for most teams. BYO storage is an option on the Custom plan for organizations that need data custody, specific regions, or their own encryption keys.
Can I migrate from managed storage to BYO later?
Yes. Contact our team for a guided migration. Objects are transferred to your bucket with no downtime — Git URLs and workflows stay the same.
Does BYO storage affect Git performance?
No. The three-tier caching layer (Redis, S3, golden packs) works the same regardless of backend. Reads hit cache first; writes go directly to your bucket.
How does GitForge access my bucket?
You provide S3-compatible credentials (access key, secret key, endpoint, bucket name). GitForge uses these to read and write objects. You control the IAM policy and can revoke access at any time.
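On AWS, the IAM policy you attach to those credentials might look like the following sketch, scoped to a single bucket. The bucket name is illustrative; adjust the actions to match how tightly you want to lock down writes:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::gitforge-objects",
        "arn:aws:s3:::gitforge-objects/*"
      ]
    }
  ]
}
```

Detaching this policy (or rotating the keys) cuts off GitForge's access immediately, without touching the data in the bucket.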
What about LFS objects on BYO storage?
LFS objects are stored in the same S3 bucket as Git objects, in a separate prefix. The LFS batch API, locking, and deduplication all work identically on BYO storage.
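One way such a shared-bucket layout might look is sketched below. The exact key scheme is an assumption for illustration (the fan-out mirrors the directory layout git-lfs uses on local disk), not GitForge's documented format:

```python
# Hypothetical key layout: Git and LFS objects in one bucket,
# separated by prefix and fanned out to avoid huge flat listings.
def git_object_key(sha):
    return f"git/objects/{sha[:2]}/{sha[2:]}"

def lfs_object_key(oid):
    # Fan out on the first two hex pairs of the SHA-256 OID.
    return f"lfs/objects/{oid[:2]}/{oid[2:4]}/{oid}"
```

Because LFS objects are content-addressed by OID, two repositories that track the same file resolve to the same key, which is what makes deduplication fall out of the layout for free.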
Git hosting on your terms
Start with managed storage on the free tier. When you need data custody, connect your own bucket on the Custom plan. Same Git workflows, same API, your infrastructure.