
[{"content":" HashiCorp Vault # HashiCorp Vault can be added as a secret provider to Plakar Control Plane by selecting vault-sm as the integration type when adding a new secret provider. You\u0026rsquo;ll then need to provide your Vault access token, the Vault server URL, and a suitable name for it.\nVault\u0026rsquo;s path format # Vault organizes secrets under secret engines. Think of a secret engine as a namespace which sits at the top of every path and tells Vault which backend to look in. When you reference a secret in Plakar Control Plane, you must include the secret engine name in the path.\nThe path format used by Plakar Control Plane is:\n{secret_engine}/{path}#{field} For example, if you have a secret at path production/aws inside the default secret engine, and you want the field access_key, you would enter:\nsecret/production/aws#access_key In the example above, we remove the data segment from the configuration path and append the field at the end; in our case, that\u0026rsquo;s #access_key\nUsing Vault secrets in Plakar Control Plane # Once Vault is configured as a secret provider, you can use it in any form field that requires a credential. Switch the field from direct value to secret provider, select your Vault instance from the dropdown, and enter the path to the secret you want to use.\n","date":"24 April 2026","externalUrl":null,"permalink":"/control-plane-docs/infrastructure/secret-providers/vault-sm/","section":"Control Plane Docs","summary":"How to configure and use HashiCorp Vault as a secret provider in Plakar Control Plane.","title":"HashiCorp Vault","type":"control-plane-docs"},{"content":" Secret providers # Plakar Control Plane handles sensitive credentials like tokens, passwords, and more. When configuring a connector, inventory, or any other resource that requires a credential, Plakar Control Plane gives you two options.\nDirect value: stores the credential directly in the Plakar Control Plane database. 
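The {secret_engine}/{path}#{field} convention described above can be parsed mechanically. A minimal sketch, assuming a hypothetical parse_secret_ref helper (this is not Plakar Control Plane\u0026rsquo;s actual parser):

```python
def parse_secret_ref(ref: str) -> tuple[str, str, str]:
    """Split a {secret_engine}/{path}#{field} reference into its parts.

    Hypothetical helper for illustration; not Plakar Control Plane code.
    """
    location, hash_sep, field = ref.partition("#")
    engine, slash_sep, path = location.partition("/")
    if not (hash_sep and slash_sep and engine and path and field):
        raise ValueError(f"invalid secret reference: {ref!r}")
    return engine, path, field

# "secret/production/aws#access_key" yields engine "secret",
# path "production/aws" and field "access_key".
```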
This is the simplest option and works well for most setups. Secret provider: delegates the credential resolution to an external secret manager. Instead of storing the value itself, Plakar Control Plane stores a path that points to the secret in your secret manager, and resolves it at runtime. Using a secret provider is recommended if your organization already manages credentials centrally, or if you want to avoid storing sensitive values in the database.\nSetting up a secret provider # Before you can use a secret provider, you need to configure one. See the provider-specific instructions for your secret manager:\nAWS Secrets Manager HashiCorp Vault Scaleway Secret Manager GCP Secret Manager ","date":"23 April 2026","externalUrl":null,"permalink":"/control-plane-docs/infrastructure/secret-providers/","section":"Control Plane Docs","summary":"How to manage credentials in Plakar Control Plane using secret providers.","title":"Secret Providers","type":"control-plane-docs"},{"content":" Getting Started Overview Enrollment Billing \u0026amp; Plans Installation Infrastructure Secret Providers ","date":"14 April 2026","externalUrl":null,"permalink":"/control-plane-docs/","section":"Control Plane Docs","summary":"Plakar Control Plane documentation hub, find guides, references, and resources for working with Plakar Control Plane.","title":"Control Plane Docs","type":"control-plane-docs"},{"content":" Getting Started # Overview An introduction to Plakar Control Plane, its core concepts, and how to get started.\nEnrollment How to enroll your Plakar Control Plane instance on first setup.\nBilling \u0026amp; Plans Plakar Control Plane plans and how to manage your license.\nInstallation How to deploy Plakar Control Plane as a virtual appliance on your infrastructure.\n","date":"14 April 2026","externalUrl":null,"permalink":"/control-plane-docs/intro/","section":"Control Plane Docs","summary":"","title":"Getting Started","type":"control-plane-docs"},{"content":" Overview # Plakar 
Control Plane is a self-hosted backup management system built on top of the open-source Plakar. It brings everything Plakar is good at, like deduplication, encryption, independent snapshots, and flexible connectors, and adds the tooling companies need to manage backups at scale: a full web interface, centralized scheduling, inventory and resource management, and more.\nIt is packaged as a virtual appliance you deploy in your own infrastructure, so your data never leaves your environment.\nWho it\u0026rsquo;s for # Plakar Control Plane is designed for companies that need reliable, auditable backups across multiple resources and environments, and for the sysadmins and DevOps engineers responsible for keeping that running.\nHow it works # You deploy Plakar Control Plane as a virtual appliance on AWS, OVHcloud, or your own infrastructure. From there, you connect it to your providers through inventories, configure what gets backed up and where, and set schedules. Everything is managed from the web interface.\nWhen you first deploy, you go through an enrollment step that registers your instance with plakar.io. This retrieves your license and sets up billing reporting. No backup data is ever transferred, only the consumption metrics needed for billing.\nCore concepts # Inventories connect Plakar Control Plane to a provider and expose the list of resources available to back up. Managed inventories sync automatically; self-managed ones let you enter resources manually.\nConnectors are the individual sources, stores, and destinations attached to a resource. A source is what gets backed up, a store is where backups are kept, and a destination is where data gets restored to.\nSecret providers let you store credentials securely in an external manager like AWS Secrets Manager or HashiCorp Vault.\nScheduling defines when backup, restore, sync, and check jobs run. 
The scheduler handles concurrency automatically.\nWhat\u0026rsquo;s next # Installation Enrollment ","date":"14 April 2026","externalUrl":null,"permalink":"/control-plane-docs/intro/overview/","section":"Control Plane Docs","summary":"An introduction to Plakar Control Plane, its core concepts, and how to get started.","title":"Overview","type":"control-plane-docs"},{"content":"","date":"24 March 2026","externalUrl":null,"permalink":"/posts/","section":"Plakar Blog","summary":"","title":"Plakar Blog","type":"posts"},{"content":"","date":"20 March 2026","externalUrl":null,"permalink":"/integrations/","section":"Plakar Integrations","summary":"","title":"Plakar Integrations","type":"integrations"},{"content":" Command line syntax # Every Plakar invocation follows this pattern:\n$ plakar [OPTIONS] [at REPOSITORY] COMMAND [COMMAND_OPTIONS]... Component Required Description OPTIONS No Global options that apply to all commands (see below) at REPOSITORY No Target repository; defaults to $PLAKAR_REPOSITORY or ~/.plakar if omitted COMMAND Yes The operation to perform (e.g. backup, restore, check) COMMAND_OPTIONS No Options and arguments specific to the command (documented under each command reference) A few examples to make the structure concrete:\n# Simplest form: just a command $ plakar version # Operating on a repository $ plakar at /backup ls # Global option + repository + command + command options $ plakar -time at /backup ls -tag daily-backups Global options # Global options appear before the at clause and apply to every command. 
Options that come after the command are command-specific and are documented in each command reference page.\nOption Description -concurrency int Limit the number of concurrent operations (default: -1) -config string Configuration directory (default: ~/.config/plakar) -cpu int Limit the number of usable CPU cores -disable-security-check Disable update check -enable-security-check Enable update check -keyfile string Use passphrase from key file when prompted -profile-cpu string Profile CPU usage -profile-mem string Profile memory usage -quiet No output except errors -silent No output at all -stdio Use stdio user interface -time Display command execution time -trace string Display trace logs, comma-separated (all, trace, repository, snapshot, server) Option order matters # Options must appear in the correct position. Global options go before at, command options go after the command.\n# Correct: -tag is a command option for ls $ plakar -time at /backup ls -tag daily-backups # Wrong: -tag is placed before the command, plakar sees it as the command name $ plakar -time at /backup -tag daily-backups ls # command not found: -tag A misplaced option will either be ignored or cause an error. When something doesn\u0026rsquo;t work as expected, check option placement first.\nGetting help # Plakar has built-in help at every level.\n# Show global usage, all options and available commands $ plakar -h $ plakar help # Show the manual page for a specific command $ plakar help \u0026lt;command\u0026gt; The built-in help is always in sync with the version of Plakar you have installed, making it the most reliable reference for available options and commands.\nEnvironment variables # Variable Description PLAKAR_PASSPHRASE Supply the encryption passphrase non-interactively PLAKAR_REPOSITORY Set the default repository path PLAKAR_PASSPHRASE # When creating or opening an encrypted repository, Plakar prompts for a passphrase. 
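The precedence described for these environment variables can be sketched as two small pure functions. This is a hypothetical illustration of the documented lookup order, not Plakar\u0026rsquo;s actual code:

```python
import os

def resolve_passphrase(env: dict, prompt) -> str:
    """PLAKAR_PASSPHRASE wins; otherwise fall back to an interactive prompt.
    Hypothetical sketch of the documented lookup order, not Plakar code."""
    value = env.get("PLAKAR_PASSPHRASE")
    return value if value is not None else prompt()

def resolve_repository(env: dict, at_clause=None) -> str:
    """'at REPOSITORY' wins, then PLAKAR_REPOSITORY, then ~/.plakar."""
    if at_clause:
        return at_clause
    return env.get("PLAKAR_REPOSITORY", os.path.expanduser("~/.plakar"))
```

In a CI pipeline you would export PLAKAR_PASSPHRASE before invoking plakar so that no terminal prompt is ever needed.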
Setting PLAKAR_PASSPHRASE provides it automatically, which is useful in scripts, CI pipelines, or any non-interactive context where a terminal prompt isn\u0026rsquo;t available.\nPLAKAR_REPOSITORY # Sets the default repository location so you don\u0026rsquo;t need to specify at REPOSITORY on every command. When omitted and no at clause is provided, Plakar falls back to ~/.plakar.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/command-line-syntax/","section":"Docs","summary":"How Plakar commands are structured, why flag order matters, and how to get help from the CLI.","title":"Command line syntax","type":"docs"},{"content":" How Plakar Works # Plakar is built on top of Kloset, an immutable data store engine designed specifically for backup workloads. Understanding how Plakar processes and stores your data helps you make informed decisions about backup strategies and troubleshoot issues when they arise.\nThis page explains the technical foundation of Plakar without step-by-step instructions. If you\u0026rsquo;re looking for practical guidance, see the Guides section.\nKloset Store # Kloset is the immutable data store engine at the heart of Plakar. It is the library that Plakar uses to store and manage backups.\nThe simplest way to see Kloset is as a \u0026ldquo;storage API\u0026rdquo; that Plakar uses to store backups. It is not a traditional REST API you might be familiar with, but rather a library that exposes a set of functions to store and retrieve data. 
For example, when making a backup, Plakar will use Kloset to retrieve the content and the metadata to be backed up, chunk it into smaller pieces, compress and encrypt those pieces, regroup them into larger files called \u0026ldquo;packfiles\u0026rdquo;, and finally write those packfiles to a storage backend such as a local filesystem, an object storage service, or a remote server.\nPlakar is a tool built on top of Kloset that provides a command-line and a web interface to manage your backups, with additional features such as scheduling, activity reporting, and more.\nWithout Plakar, you would have to write your own code to use Kloset. With Plakar, you get an easy-to-use tool to implement a backup strategy, be it for your personal laptop or your large-scale infrastructure.\nIf you want to dig deeper into Kloset and see all the features it provides, read the Kloset blog post.\nBackup steps # When you run a backup command, Plakar will use the integration you specified to retrieve the content to be backed up.\nFor example, the built-in filesystem integration will scan the directory you specified, and retrieve the content and metadata of the files and directories to be backed up.\nThere are several steps that Plakar (actually, Kloset) will perform to create a backup:\nChunking: The content is split into smaller pieces called \u0026ldquo;chunks\u0026rdquo;. If you attempt to back up a large video file for example, it will be split into smaller chunks to make it easier to store and manage. Deduplication: The chunks are deduplicated, meaning that if the same chunk already exists in the store, it will not be stored again. This is a key feature of Plakar that allows you to save space and time when backing up large files or directories that contain many duplicate files. This is also the reason why you can create multiple snapshots of the same directory without consuming much more space than a single snapshot. 
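The chunk-then-deduplicate idea can be sketched in a few lines. This toy version uses fixed-size chunks and a plain dict as the store; real Kloset uses content-defined chunking plus compression and encryption, so treat this purely as an illustration:

```python
import hashlib

def backup(store: dict, content: bytes, chunk_size: int = 4) -> list[str]:
    """Store each unique chunk once, keyed by its hash; return the
    snapshot's chunk index (the list of hashes making up the content)."""
    index = []
    for i in range(0, len(content), chunk_size):
        chunk = content[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # no-op if the chunk already exists
        index.append(digest)
    return index

store: dict = {}
backup(store, b"aaaabbbbcccc")     # 3 unique chunks stored
backup(store, b"aaaabbbbcccc")     # identical snapshot: nothing new stored
backup(store, b"aaaabbbbdddd")     # one chunk changed: 1 new chunk stored
```

After these three snapshots the store holds only four unique chunks, which is why repeated snapshots of mostly unchanged data cost little extra space.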
You might already understand that that\u0026rsquo;s why chunking is so important: if we didn\u0026rsquo;t chunk the content, then adding a single byte to a file would mean that the whole file would have to be stored again. With chunking, only the chunk containing the changed byte will be stored again, and the rest of the file will remain unchanged. Compression: The chunks are compressed to save space. Encryption: The chunks are encrypted. We call the encrypted chunks \u0026ldquo;blobs\u0026rdquo;. These blobs are sent to the storage backend, which acts as a \u0026ldquo;dumb storage\u0026rdquo;: it does not know anything about the content of the blobs, it just stores them as they are. This is what we call \u0026ldquo;real end-to-end encryption\u0026rdquo;: the storage backend does not have access to the content of the backups, and only you can decrypt them. Independent snapshots # In a Kloset store, each backup is stored as an independent snapshot. This means that you can create multiple snapshots of the same data source without consuming much more space than a single snapshot. Each snapshot contains the content and metadata at the time of the backup, and can be restored independently of other snapshots.\nThese snapshots are not incremental backups, meaning that they do not depend on any other snapshot. You can delete a snapshot without affecting any of the subsequent snapshots, and you can compare the differences between a snapshot and any other snapshot.\nContent Defined Chunking (CDC) # As seen in the Backup steps section, Kloset uses Content Defined Chunking (CDC) to split the content into smaller pieces called \u0026ldquo;chunks\u0026rdquo;.\nTo understand why chunking is important, consider the following: let\u0026rsquo;s say you have a large video file that you want to back up. If you didn\u0026rsquo;t chunk the content, then adding a single byte to the end of the file would mean that the whole file would have to be stored again. 
This would be very inefficient, especially if you have large files that change frequently.\nNow, let\u0026rsquo;s understand why CDC is important. In our video example, what would happen if we added a single byte to the middle of the file? With a fixed-size chunking algorithm, all the subsequent chunks would be considered changed, and they would have to be stored again.\nCDC stands for \u0026ldquo;Content Defined Chunking\u0026rdquo;, and it is a technique that uses the content of the file to determine the size of the chunks. This means that if you add a single byte to the middle of a file, only the chunk containing that byte will be considered changed, and only that chunk will be stored again. The rest of the file will remain unchanged. The \u0026ldquo;single byte change\u0026rdquo; in the middle of the file is obviously an example, and the same applies if you make larger changes to the file, such as adding or removing a few lines of text in a text file, or changing a few pixels in an image file.\nTo get a better understanding of how CDC works and to know more about go-cdc-chunkers, the library we open-sourced to implement CDC in Kloset, read the go-cdc-chunkers blog post.\nCompression # Kloset uses compression to save space when storing backups. The compression is applied to the chunks before they are encrypted and stored in the storage backend.\nPlakar currently uses LZ4, a fast compression algorithm that is well suited for backups.\nBacking up encrypted data # When backing up data, you have to make a choice: do you want to back up encrypted data or not?\nIf you choose to back up encrypted data, then you defeat the deduplication and compression features of Kloset. Whenever you change a single byte in an encrypted file, the whole file will be considered changed, and it will be stored again. 
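The effect can be demonstrated with a toy chained cipher: flip a single input byte and every output block changes, leaving nothing for deduplication. Here sha256 merely stands in for a real cipher\u0026rsquo;s avalanche and chaining behaviour; this is an illustration, not Kloset\u0026rsquo;s cryptography:

```python
import hashlib

def toy_encrypt(data: bytes, block_size: int = 16) -> bytes:
    """Chained toy 'cipher': each output block depends on every earlier
    input block. Illustration only; not a real or secure cipher."""
    out, prev = b"", b"iv"
    for i in range(0, len(data), block_size):
        prev = hashlib.sha256(prev + data[i:i + block_size]).digest()[:block_size]
        out += prev
    return out

cipher_a = toy_encrypt(b"x" * 64)          # 4 ciphertext blocks
cipher_b = toy_encrypt(b"y" + b"x" * 63)   # same file, one byte changed
chunks_a = {cipher_a[i:i + 16] for i in range(0, 64, 16)}
chunks_b = {cipher_b[i:i + 16] for i in range(0, 64, 16)}
assert chunks_a.isdisjoint(chunks_b)       # not one chunk deduplicates
```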
This is because the encryption algorithm produces a completely different output when even a single byte of the input changes.\nStill, there might be situations where you want to back up encrypted data, but be aware that you will not benefit from all the optimizations that Kloset provides.\nTamper-evident snapshots # The data stored in Kloset is tamper-evident. This doesn\u0026rsquo;t mean the storage backend is \u0026ldquo;immutable\u0026rdquo; in the sense that it cannot be changed. If you store data on a hard drive, for example, it can be changed by anyone with access to the hard drive, and in any case, data can be lost or corrupted due to hardware failures.\nWhen we say that the data is tamper-evident, we mean that Kloset uses cryptographic techniques to ensure that any change to the data will be detected. Each snapshot is signed with a cryptographic hash, and any change to the data will result in a different hash. This means that if someone tries to change the data, you will be able to detect it by checking the hash of the snapshot.\nFrom there, you can decide what to do with the tampered snapshot: should you untrust the whole store and use another copy, or should you ignore what may be a single tampered item and continue using the store as is? This is up to you, but Kloset will always let you know if something is wrong.\nIntegration # We designed Plakar to be as flexible as possible. Nowadays, you not only want to back up your filesystem, but also your databases, your cloud storage, your remote servers, your SaaS applications, and more. To achieve this, Plakar uses the concept of \u0026ldquo;integrations\u0026rdquo;.\nAn integration provides a storage connector, a source connector, and a destination connector; or a combination of those.\nThese integrations are implemented as plugins, and we made the process of installing and using them as easy as possible. 
We also provide an easy way to create your own integration if you need to back up a data source that is not supported by Plakar out of the box. For example, the FTP source connector is about 80 lines of code, imports included.\nCheck out the list of available integrations to see what is already available.\nStorage connector # The storage connector is the part of the integration that allows Plakar to host the Kloset store on a specific storage backend. It is responsible for storing the blobs (the encrypted chunks) in the storage backend, and for retrieving them when needed.\nFor example, Plakar has a built-in storage connector for filesystems and S3-compatible object storage services, but it is possible to install integrations to host your Kloset store on Google Drive or Dropbox.\nSource connector # The source connector is the part of the integration that allows Plakar to retrieve the content to be backed up. It is responsible for scanning the data source, retrieving the content and metadata, and passing it to Kloset for processing.\nFor example, Plakar has a built-in source connector for SFTP servers, which lets you back up files from a remote server over SSH, but it is possible to install the integration for Notion to back up your Notion pages, or the integration for Google Photos to make sure your memories are safe.\nDestination connector # The destination connector is the part of the integration that allows Plakar to restore the content from a backup. It is responsible for retrieving the content and metadata from Kloset, and restoring it to the target location.\nFor example, Plakar has a built-in destination connector for filesystems, which lets you restore files to a local directory. 
Similarly, it is possible to install the integration for Google Drive, Dropbox, or any other cloud storage service to restore your backups to the cloud.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/explanations/how-plakar-works/","section":"Docs","summary":"Understand the core architecture and data processing pipeline behind Plakar, including Kloset stores, chunking, deduplication, compression, encryption, and snapshot management","title":"How Plakar Works","type":"docs"},{"content":" How Plakar Works # Plakar is built on top of Kloset, an immutable data store engine designed specifically for backup workloads. Understanding how Plakar processes and stores your data helps you make informed decisions about backup strategies and troubleshoot issues when they arise.\nThis page explains the technical foundation of Plakar without step-by-step instructions. If you\u0026rsquo;re looking for practical guidance, see the Guides section.\nKloset Store # Kloset is the immutable data store engine at the heart of Plakar. It is the library that Plakar uses to store and manage backups.\nThe simplest way to see Kloset is as a \u0026ldquo;storage API\u0026rdquo; that Plakar uses to store backups. It is not a traditional REST API you might be familiar with, but rather a library that exposes a set of functions to store and retrieve data. 
For example, when making a backup, Plakar will use Kloset to retrieve the content and the metadata to be backed up, chunk it into smaller pieces, compress and encrypt those pieces, regroup them into larger files called \u0026ldquo;packfiles\u0026rdquo;, and finally write those packfiles to a storage backend such as a local filesystem, an object storage service, or a remote server.\nPlakar is a tool built on top of Kloset that provides a command-line and a web interface to manage your backups, with additional features such as scheduling, activity reporting, and more.\nWithout Plakar, you would have to write your own code to use Kloset. With Plakar, you get an easy-to-use tool to implement a backup strategy, be it for your personal laptop or your large-scale infrastructure.\nIf you want to dig deeper into Kloset and see all the features it provides, read the Kloset blog post.\nBackup steps # When you run a backup command, Plakar will use the integration you specified to retrieve the content to be backed up.\nFor example, the built-in filesystem integration will scan the directory you specified, and retrieve the content and metadata of the files and directories to be backed up.\nThere are several steps that Plakar (actually, Kloset) will perform to create a backup:\nChunking: The content is split into smaller pieces called \u0026ldquo;chunks\u0026rdquo;. If you attempt to back up a large video file for example, it will be split into smaller chunks to make it easier to store and manage. Deduplication: The chunks are deduplicated, meaning that if the same chunk already exists in the store, it will not be stored again. This is a key feature of Plakar that allows you to save space and time when backing up large files or directories that contain many duplicate files. This is also the reason why you can create multiple snapshots of the same directory without consuming much more space than a single snapshot. 
You might already understand that that\u0026rsquo;s why chunking is so important: if we didn\u0026rsquo;t chunk the content, then adding a single byte to a file would mean that the whole file would have to be stored again. With chunking, only the chunk containing the changed byte will be stored again, and the rest of the file will remain unchanged. Compression: The chunks are compressed to save space. Encryption: The chunks are encrypted. We call the encrypted chunks \u0026ldquo;blobs\u0026rdquo;. These blobs are sent to the storage backend, which acts as a \u0026ldquo;dumb storage\u0026rdquo;: it does not know anything about the content of the blobs, it just stores them as they are. This is what we call \u0026ldquo;real end-to-end encryption\u0026rdquo;: the storage backend does not have access to the content of the backups, and only you can decrypt them. Independent snapshots # In a Kloset store, each backup is stored as an independent snapshot. This means that you can create multiple snapshots of the same data source without consuming much more space than a single snapshot. Each snapshot contains the content and metadata at the time of the backup, and can be restored independently of other snapshots.\nThese snapshots are not incremental backups, meaning that they do not depend on any other snapshot. You can delete a snapshot without affecting any of the subsequent snapshots, and you can compare the differences between a snapshot and any other snapshot.\nContent Defined Chunking (CDC) # As seen in the Backup steps section, Kloset uses Content Defined Chunking (CDC) to split the content into smaller pieces called \u0026ldquo;chunks\u0026rdquo;.\nTo understand why chunking is important, consider the following: let\u0026rsquo;s say you have a large video file that you want to back up. If you didn\u0026rsquo;t chunk the content, then adding a single byte to the end of the file would mean that the whole file would have to be stored again. 
This would be very inefficient, especially if you have large files that change frequently.\nNow, let\u0026rsquo;s understand why CDC is important. In our video example, what would happen if we added a single byte to the middle of the file? With a fixed-size chunking algorithm, all the subsequent chunks would be considered changed, and they would have to be stored again.\nCDC stands for \u0026ldquo;Content Defined Chunking\u0026rdquo;, and it is a technique that uses the content of the file to determine the size of the chunks. This means that if you add a single byte to the middle of a file, only the chunk containing that byte will be considered changed, and only that chunk will be stored again. The rest of the file will remain unchanged. The \u0026ldquo;single byte change\u0026rdquo; in the middle of the file is obviously an example, and the same applies if you make larger changes to the file, such as adding or removing a few lines of text in a text file, or changing a few pixels in an image file.\nTo get a better understanding of how CDC works and to know more about go-cdc-chunkers, the library we open-sourced to implement CDC in Kloset, read the go-cdc-chunkers blog post.\nCompression # Kloset uses compression to save space when storing backups. The compression is applied to the chunks before they are encrypted and stored in the storage backend.\nPlakar currently uses LZ4, a fast compression algorithm that is well suited for backups.\nBacking up encrypted data # When backing up data, you have to make a choice: do you want to back up encrypted data or not?\nIf you choose to back up encrypted data, then you defeat the deduplication and compression features of Kloset. Whenever you change a single byte in an encrypted file, the whole file will be considered changed, and it will be stored again. 
This is because the encryption algorithm produces a completely different output when even a single byte of the input changes.\nStill, there might be situations where you want to back up encrypted data, but be aware that you will not benefit from all the optimizations that Kloset provides.\nTamper-evident snapshots # The data stored in Kloset is tamper-evident. This doesn\u0026rsquo;t mean the storage backend is \u0026ldquo;immutable\u0026rdquo; in the sense that it cannot be changed. If you store data on a hard drive, for example, it can be changed by anyone with access to the hard drive, and in any case, data can be lost or corrupted due to hardware failures.\nWhen we say that the data is tamper-evident, we mean that Kloset uses cryptographic techniques to ensure that any change to the data will be detected. Each snapshot is signed with a cryptographic hash, and any change to the data will result in a different hash. This means that if someone tries to change the data, you will be able to detect it by checking the hash of the snapshot.\nFrom there, you can decide what to do with the tampered snapshot: should you untrust the whole store and use another copy, or should you ignore what may be a single tampered item and continue using the store as is? This is up to you, but Kloset will always let you know if something is wrong.\nIntegration # We designed Plakar to be as flexible as possible. Nowadays, you not only want to back up your filesystem, but also your databases, your cloud storage, your remote servers, your SaaS applications, and more. To achieve this, Plakar uses the concept of \u0026ldquo;integrations\u0026rdquo;.\nAn integration provides a storage connector, a source connector, and a destination connector; or a combination of those.\nThese integrations are implemented as plugins, and we made the process of installing and using them as easy as possible. 
We also provide an easy way to create your own integration if you need to back up a data source that is not supported by Plakar out of the box. For example, the FTP source connector is about 80 lines of code, imports included.\nCheck out the list of available integrations to see what is already available.\nStorage connector # The storage connector is the part of the integration that allows Plakar to host the Kloset store on a specific storage backend. It is responsible for storing the blobs (the encrypted chunks) in the storage backend, and for retrieving them when needed.\nFor example, Plakar has a built-in storage connector for filesystems and S3-compatible object storage services, but it is possible to install integrations to host your Kloset store on Google Drive or Dropbox.\nSource connector # The source connector is the part of the integration that allows Plakar to retrieve the content to be backed up. It is responsible for scanning the data source, retrieving the content and metadata, and passing it to Kloset for processing.\nFor example, Plakar has a built-in source connector for SFTP servers, which lets you back up files from a remote server over SSH, but it is possible to install the integration for Notion to back up your Notion pages, or the integration for Google Photos to make sure your memories are safe.\nDestination connector # The destination connector is the part of the integration that allows Plakar to restore the content from a backup. It is responsible for retrieving the content and metadata from Kloset, and restoring it to the target location.\nFor example, Plakar has a built-in destination connector for filesystems, which lets you restore files to a local directory. 
Similarly, it is possible to install the integration for Google Drive, Dropbox, or any other cloud storage service to restore your backups to the cloud.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/explanations/how-plakar-works/","section":"Docs","summary":"Understand the core architecture and data processing pipeline behind Plakar, including Kloset stores, chunking, deduplication, compression, encryption, and snapshot management","title":"How Plakar Works","type":"docs"},{"content":" How Plakar Works # Plakar is built on top of Kloset, an immutable data store engine designed specifically for backup workloads. Understanding how Plakar processes and stores your data helps you make informed decisions about backup strategies and troubleshoot issues when they arise.\nThis page explains the technical foundation of Plakar without step-by-step instructions. If you\u0026rsquo;re looking for practical guidance, see the Guides section.\nKloset Store # Kloset is the immutable data store engine at the heart of Plakar. It is the library that Plakar uses to store and manage backups.\nThe simplest way to see Kloset is as a \u0026ldquo;storage API\u0026rdquo; that Plakar uses to store backups. It is not a traditional REST API you might be familiar with, but rather a library that exposes a set of functions to store and retrieve data. 
For example, when making a backup, Plakar will use Kloset to retrieve the content and the metadata to be backed up, chunk it into smaller pieces, compress and encrypt those pieces, regroup them into larger files called \u0026ldquo;packfiles\u0026rdquo;, and finally write those packfiles to a storage backend such as a local filesystem, an object storage service, or a remote server.\nPlakar is a tool built on top of Kloset, which provides a command-line and a web interface to manage your backups, with additional features such as scheduling, activity reporting, and more.\nWithout Plakar, you would have to write your own code to use Kloset. With Plakar, you get an easy-to-use tool to implement a backup strategy, be it for your personal laptop or your large-scale infrastructure.\nIf you want to dig deeper into Kloset and see all the features it provides, read the Kloset blog post.\nBackup steps # When you run a backup command, Plakar will use the integration you specified to retrieve the content to be backed up.\nFor example, the built-in filesystem integration will scan the directory you specified, and retrieve the content and metadata of the files and directories to be backed up.\nThere are several steps that Plakar (actually, Kloset) will perform to create a backup:\nChunking: The content is split into smaller pieces called \u0026ldquo;chunks\u0026rdquo;. If you attempt to back up a large video file, for example, it will be split into smaller chunks to make it easier to store and manage. Deduplication: The chunks are deduplicated, meaning that if the same chunk already exists in the store, it will not be stored again. This is a key feature of Plakar that allows you to save space and time when backing up large files or directories that contain many duplicate files. This is also the reason why you can create multiple snapshots of the same directory without consuming much more space than a single snapshot. 
This is why chunking is so important: if we didn\u0026rsquo;t chunk the content, then adding a single byte to a file would mean that the whole file would have to be stored again. With chunking, only the chunk containing the changed byte will be stored again, and the rest of the file will remain unchanged. Compression: The chunks are compressed to save space. Encryption: The chunks are encrypted. We call the encrypted chunks \u0026ldquo;blobs\u0026rdquo;. These blobs are sent to the storage backend, which acts as a \u0026ldquo;dumb storage\u0026rdquo;: it does not know anything about the content of the blobs; it just stores them as they are. This is what we call \u0026ldquo;real end-to-end encryption\u0026rdquo;: the storage backend does not have access to the content of the backups, and only you can decrypt them. Independent snapshots # In a Kloset store, each backup is stored as an independent snapshot. This means that you can create multiple snapshots of the same data source without consuming much more space than a single snapshot. Each snapshot contains the content and metadata at the time of the backup, and can be restored independently of other snapshots.\nThese snapshots are not incremental backups, meaning that they do not depend on any other snapshot. You can delete a snapshot without affecting any of the subsequent snapshots, and you can compare the differences between a snapshot and any other snapshot.\nContent Defined Chunking (CDC) # As seen in the Backup steps section, Kloset uses Content Defined Chunking (CDC) to split the content into smaller pieces called \u0026ldquo;chunks\u0026rdquo;.\nTo understand why chunking is important, consider the following: let\u0026rsquo;s say you have a large video file that you want to back up. If you didn\u0026rsquo;t chunk the content, then adding a single byte to the end of the file would mean that the whole file would have to be stored again. 
This would be very inefficient, especially if you have large files that change frequently.\nNow, let\u0026rsquo;s understand why CDC is important. In our video example, what would happen if we added a single byte to the middle of the file? With a fixed-size chunking algorithm, all the subsequent chunks would be considered changed, and they would have to be stored again.\nCDC stands for \u0026ldquo;Content Defined Chunking\u0026rdquo;, and it is a technique that uses the content of the file to determine the boundaries of the chunks. This means that if you add a single byte to the middle of a file, only the chunk containing that byte will be considered changed, and only that chunk will be stored again. The rest of the file will remain unchanged. The \u0026ldquo;single byte change\u0026rdquo; in the middle of the file is just an example; the same applies if you make larger changes to the file, such as adding or removing a few lines of text in a text file, or changing a few pixels in an image file.\nTo get a better understanding of how CDC works and to know more about go-cdc-chunkers, the library we open-sourced to implement CDC in Kloset, read the go-cdc-chunkers blog post.\nCompression # Kloset uses compression to save space when storing backups. The compression is applied to the chunks before they are encrypted and stored in the storage backend.\nPlakar currently uses LZ4, a fast compression algorithm that is well suited for backups.\nBacking up encrypted data # When backing up data, you have to make a choice: do you want to back up encrypted data or not?\nIf you choose to back up encrypted data, then you defeat the deduplication and compression features of Kloset. Whenever you change a single byte in an encrypted file, the whole file will be considered changed, and it will be stored again. 
This is because the encryption algorithm will produce a completely different output if even a single byte of the input is changed.\nStill, there might be situations where you want to back up encrypted data, but be aware that you will not benefit from all the optimizations that Kloset provides.\nTamper-evident snapshots # The data stored in Kloset is tamper-evident. This doesn\u0026rsquo;t mean the storage backend is \u0026ldquo;immutable\u0026rdquo; in the sense that it cannot be changed. If you store data on a hard drive, for example, it can be changed by anyone with access to the hard drive, and in any case, data can be lost or tampered with due to hardware failures.\nWhen we say that the data is tamper-evident, we mean that Kloset uses cryptographic techniques to ensure that any change to the data will be detected. Each snapshot is signed with a cryptographic hash, and any change to the data will result in a different hash. This means that if someone tries to change the data, you will be able to detect it by checking the hash of the snapshot.\nFrom there, you can decide what to do with the tampered snapshot: should you stop trusting the whole store and use another copy, or should you ignore the single tampered item and continue using the store as is? This is up to you, but Kloset will always let you know if something is wrong.\nIntegration # We designed Plakar to be as flexible as possible. Nowadays, you not only want to back up your filesystem, but also your databases, your cloud storage, your remote servers, your SaaS applications, and more. To achieve this, Plakar uses the concept of \u0026ldquo;integrations\u0026rdquo;.\nAn integration provides a storage connector, a source connector, and a destination connector; or a combination of those.\nThese integrations are implemented as plugins, and we made the process of installing and using them as easy as possible. 
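The chunking and deduplication steps described in the Backup steps section can be sketched in a few lines of Python. This is an illustrative toy using fixed-size chunks and an in-memory dict as the store; it is not the CDC algorithm or storage format Kloset actually uses:

```python
import hashlib

def chunk_fixed(data, size=4):
    # toy fixed-size chunking; Kloset uses content-defined chunking instead
    return [data[i:i + size] for i in range(0, len(data), size)]

def dedup(chunks, store):
    # index each chunk by its hash so identical chunks are stored only once
    new = 0
    for c in chunks:
        key = hashlib.sha256(c).hexdigest()
        if key not in store:
            store[key] = c
            new += 1
    return new

store = {}
first = dedup(chunk_fixed(b'aaaabbbbccccdddd'), store)   # 4 new chunks
second = dedup(chunk_fixed(b'aaaabbbbccccdddd'), store)  # 0 new chunks
print(first, second)
```

A second backup of unchanged data stores nothing new, which is why repeated snapshots of the same directory cost little extra space.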
We also provide an easy way to create your own integration if you need to back up a data source that is not supported by Plakar out of the box. For example, the FTP source connector is about 80 lines of code, imports included.\nCheck out the list of available integrations to see what is already available.\nStorage connector # The storage connector is the part of the integration that allows Plakar to host the Kloset store on a specific storage backend. It is responsible for storing the blobs (the encrypted chunks) in the storage backend, and for retrieving them when needed.\nFor example, Plakar has a built-in storage connector for filesystems and S3-compatible object storage services, but it is possible to install integrations to host your Kloset store on Google Drive or Dropbox.\nSource connector # The source connector is the part of the integration that allows Plakar to retrieve the content to be backed up. It is responsible for scanning the data source, retrieving the content and metadata, and passing it to Kloset for processing.\nFor example, Plakar has a built-in source connector for SFTP servers, which allows you to back up files from a remote server over SSH, but it is possible to install the integration for Notion to back up your Notion pages, or the integration for Google Photos to make sure your memories are safe.\nDestination connector # The destination connector is the part of the integration that allows Plakar to restore the content from a backup. It is responsible for retrieving the content and metadata from Kloset, and restoring it to the target location.\nFor example, Plakar has a built-in destination connector for filesystems, which allows you to restore files to a local directory. 
Similarly, it is possible to install the integration for Google Drive, Dropbox, or any other cloud storage service to restore your backups to the cloud.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/explanations/how-plakar-works/","section":"Docs","summary":"Understand the core architecture and data processing pipeline behind Plakar, including Kloset stores, chunking, deduplication, compression, encryption, and snapshot management","title":"How Plakar Works","type":"docs"},{"content":" How Plakar Works # Plakar is built on top of Kloset, an immutable data store engine designed specifically for backup workloads. Understanding how Plakar processes and stores your data helps you make informed decisions about backup strategies and troubleshoot issues when they arise.\nThis page explains the technical foundation of Plakar without step-by-step instructions. If you\u0026rsquo;re looking for practical guidance, see the Guides section.\nKloset Store # Kloset is the immutable data store engine at the heart of Plakar. It is the library that Plakar uses to store and manage backups.\nThe simplest way to see Kloset is as a \u0026ldquo;storage API\u0026rdquo; that Plakar uses to store backups. It is not a traditional REST API you might be familiar with, but rather a library that exposes a set of functions to store and retrieve data. 
For example, when making a backup, Plakar will use Kloset to retrieve the content and the metadata to be backed up, chunk it into smaller pieces, compress and encrypt those pieces, regroup them into larger files called \u0026ldquo;packfiles\u0026rdquo;, and finally write those packfiles to a storage backend such as a local filesystem, an object storage service, or a remote server.\nPlakar is a tool built on top of Kloset, which provides a command-line and a web interface to manage your backups, with additional features such as scheduling, activity reporting, and more.\nWithout Plakar, you would have to write your own code to use Kloset. With Plakar, you get an easy-to-use tool to implement a backup strategy, be it for your personal laptop or your large-scale infrastructure.\nIf you want to dig deeper into Kloset and see all the features it provides, read the Kloset blog post.\nBackup steps # When you run a backup command, Plakar will use the integration you specified to retrieve the content to be backed up.\nFor example, the built-in filesystem integration will scan the directory you specified, and retrieve the content and metadata of the files and directories to be backed up.\nThere are several steps that Plakar (actually, Kloset) will perform to create a backup:\nChunking: The content is split into smaller pieces called \u0026ldquo;chunks\u0026rdquo;. If you attempt to back up a large video file, for example, it will be split into smaller chunks to make it easier to store and manage. Deduplication: The chunks are deduplicated, meaning that if the same chunk already exists in the store, it will not be stored again. This is a key feature of Plakar that allows you to save space and time when backing up large files or directories that contain many duplicate files. This is also the reason why you can create multiple snapshots of the same directory without consuming much more space than a single snapshot. 
This is why chunking is so important: if we didn\u0026rsquo;t chunk the content, then adding a single byte to a file would mean that the whole file would have to be stored again. With chunking, only the chunk containing the changed byte will be stored again, and the rest of the file will remain unchanged. Compression: The chunks are compressed to save space. Encryption: The chunks are encrypted. We call the encrypted chunks \u0026ldquo;blobs\u0026rdquo;. These blobs are sent to the storage backend, which acts as a \u0026ldquo;dumb storage\u0026rdquo;: it does not know anything about the content of the blobs; it just stores them as they are. This is what we call \u0026ldquo;real end-to-end encryption\u0026rdquo;: the storage backend does not have access to the content of the backups, and only you can decrypt them. Independent snapshots # In a Kloset store, each backup is stored as an independent snapshot. This means that you can create multiple snapshots of the same data source without consuming much more space than a single snapshot. Each snapshot contains the content and metadata at the time of the backup, and can be restored independently of other snapshots.\nThese snapshots are not incremental backups, meaning that they do not depend on any other snapshot. You can delete a snapshot without affecting any of the subsequent snapshots, and you can compare the differences between a snapshot and any other snapshot.\nContent Defined Chunking (CDC) # As seen in the Backup steps section, Kloset uses Content Defined Chunking (CDC) to split the content into smaller pieces called \u0026ldquo;chunks\u0026rdquo;.\nTo understand why chunking is important, consider the following: let\u0026rsquo;s say you have a large video file that you want to back up. If you didn\u0026rsquo;t chunk the content, then adding a single byte to the end of the file would mean that the whole file would have to be stored again. 
This would be very inefficient, especially if you have large files that change frequently.\nNow, let\u0026rsquo;s understand why CDC is important. In our video example, what would happen if we added a single byte to the middle of the file? With a fixed-size chunking algorithm, all the subsequent chunks would be considered changed, and they would have to be stored again.\nCDC stands for \u0026ldquo;Content Defined Chunking\u0026rdquo;, and it is a technique that uses the content of the file to determine the boundaries of the chunks. This means that if you add a single byte to the middle of a file, only the chunk containing that byte will be considered changed, and only that chunk will be stored again. The rest of the file will remain unchanged. The \u0026ldquo;single byte change\u0026rdquo; in the middle of the file is just an example; the same applies if you make larger changes to the file, such as adding or removing a few lines of text in a text file, or changing a few pixels in an image file.\nTo get a better understanding of how CDC works and to know more about go-cdc-chunkers, the library we open-sourced to implement CDC in Kloset, read the go-cdc-chunkers blog post.\nCompression # Kloset uses compression to save space when storing backups. The compression is applied to the chunks before they are encrypted and stored in the storage backend.\nPlakar currently uses LZ4, a fast compression algorithm that is well suited for backups.\nBacking up encrypted data # When backing up data, you have to make a choice: do you want to back up encrypted data or not?\nIf you choose to back up encrypted data, then you defeat the deduplication and compression features of Kloset. Whenever you change a single byte in an encrypted file, the whole file will be considered changed, and it will be stored again. 
This is because the encryption algorithm will produce a completely different output if even a single byte of the input is changed.\nStill, there might be situations where you want to back up encrypted data, but be aware that you will not benefit from all the optimizations that Kloset provides.\nTamper-evident snapshots # The data stored in Kloset is tamper-evident. This doesn\u0026rsquo;t mean the storage backend is \u0026ldquo;immutable\u0026rdquo; in the sense that it cannot be changed. If you store data on a hard drive, for example, it can be changed by anyone with access to the hard drive, and in any case, data can be lost or tampered with due to hardware failures.\nWhen we say that the data is tamper-evident, we mean that Kloset uses cryptographic techniques to ensure that any change to the data will be detected. Each snapshot is signed with a cryptographic hash, and any change to the data will result in a different hash. This means that if someone tries to change the data, you will be able to detect it by checking the hash of the snapshot.\nFrom there, you can decide what to do with the tampered snapshot: should you stop trusting the whole store and use another copy, or should you ignore the single tampered item and continue using the store as is? This is up to you, but Kloset will always let you know if something is wrong.\nIntegration # We designed Plakar to be as flexible as possible. Nowadays, you not only want to back up your filesystem, but also your databases, your cloud storage, your remote servers, your SaaS applications, and more. To achieve this, Plakar uses the concept of \u0026ldquo;integrations\u0026rdquo;.\nAn integration provides a storage connector, a source connector, and a destination connector; or a combination of those.\nThese integrations are implemented as plugins, and we made the process of installing and using them as easy as possible. 
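The tamper-evidence property described above can be illustrated with a minimal Python sketch. The digest scheme here is hypothetical and purely for illustration; Kloset uses its own cryptographic format:

```python
import hashlib

def snapshot_digest(blobs):
    # hash all blobs in order; any change to any blob flips the digest
    h = hashlib.sha256()
    for b in blobs:
        h.update(b)
    return h.hexdigest()

blobs = [b'blob-one', b'blob-two']
trusted = snapshot_digest(blobs)   # recorded at backup time

blobs[1] = b'blob-TWO'             # simulate tampering in storage
tampered = snapshot_digest(blobs)
print(tampered != trusted)         # True: the change is detected
```

Note that this only detects tampering; deciding what to do with a tampered snapshot remains up to you, as the text explains.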
We also provide an easy way to create your own integration if you need to back up a data source that is not supported by Plakar out of the box. For example, the FTP source connector is about 80 lines of code, imports included.\nCheck out the list of available integrations to see what is already available.\nStorage connector # The storage connector is the part of the integration that allows Plakar to host the Kloset store on a specific storage backend. It is responsible for storing the blobs (the encrypted chunks) in the storage backend, and for retrieving them when needed.\nFor example, Plakar has a built-in storage connector for filesystems and S3-compatible object storage services, but it is possible to install integrations to host your Kloset store on Google Drive or Dropbox.\nSource connector # The source connector is the part of the integration that allows Plakar to retrieve the content to be backed up. It is responsible for scanning the data source, retrieving the content and metadata, and passing it to Kloset for processing.\nFor example, Plakar has a built-in source connector for SFTP servers, which allows you to back up files from a remote server over SSH, but it is possible to install the integration for Notion to back up your Notion pages, or the integration for Google Photos to make sure your memories are safe.\nDestination connector # The destination connector is the part of the integration that allows Plakar to restore the content from a backup. It is responsible for retrieving the content and metadata from Kloset, and restoring it to the target location.\nFor example, Plakar has a built-in destination connector for filesystems, which allows you to restore files to a local directory. 
Similarly, it is possible to install the integration for Google Drive, Dropbox, or any other cloud storage service to restore your backups to the cloud.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/explanations/how-plakar-works/","section":"Docs","summary":"Understand the core architecture and data processing pipeline behind Plakar, including Kloset stores, chunking, deduplication, compression, encryption, and snapshot management","title":"How Plakar Works","type":"docs"},{"content":" Logical backups with pg_dump # The Plakar PostgreSQL integration uses pg_dump and pg_dumpall to produce logical backups which are portable, version-independent SQL representations of your databases. Logical backups can be restored to a different PostgreSQL major version and allow selective restore of individual databases, namespaces, or tables.\nFor a deeper understanding of SQL dumps and PostgreSQL backup strategies, refer to the official PostgreSQL documentation on SQL dumps.\nRequirements # A running PostgreSQL server. A PostgreSQL superuser, or a user with sufficient privileges to run pg_dump and pg_dumpall. The following client tools available in $PATH: pg_dump, pg_dumpall, pg_restore, psql. Managed services (RDS, Cloud SQL, etc.) On managed services where the administrative user is a restricted superuser and cannot read pg_authid, the integration automatically falls back to --no-role-passwords. The dump is otherwise complete, but restored roles will have no password set.\nInstall the package # $ plakar pkg add postgresql What gets stored in a snapshot # Single database backup produces two records:\n/globals.sql — roles and tablespaces from pg_dumpall --globals-only. /\u0026lt;dbname\u0026gt;.dump — the database in pg_dump custom format (-Fc). Full cluster backup produces one record:\n/all.sql — all databases, roles, and tablespaces from pg_dumpall. 
Both backup types also include a /manifest.json record written before the dump data, containing cluster-level metadata: server version, roles, tablespaces, databases, schemas, and relation details. See Snapshot manifest below.\nBack up a single database # $ plakar source add mypg postgres://postgres:secret@db.example.com/myapp $ plakar at /var/backups backup @mypg Back up all databases # Omit the database name to back up the entire cluster with pg_dumpall:\n$ plakar source add mypg postgres://postgres:secret@db.example.com/ $ plakar at /var/backups backup @mypg Restore a single database # The target database must already exist:\n$ plakar destination add mypgdst postgres://postgres:secret@db.example.com/myapp $ plakar at /var/backups restore -to @mypgdst \u0026lt;snapshot_id\u0026gt; To have Plakar create the database automatically, set create_db=true:\n$ plakar destination add mypgdst postgres://postgres:secret@db.example.com/myapp \\ create_db=true $ plakar at /var/backups restore -to @mypgdst \u0026lt;snapshot_id\u0026gt; Restore all databases # $ plakar destination add mypgdst postgres://postgres:secret@db.example.com/ $ plakar at /var/backups restore -to @mypgdst \u0026lt;snapshot_id\u0026gt; List snapshots # $ plakar at /var/backups ls Source options # Parameter Default Description location — Connection URI: postgres://[user[:password]@]host[:port][/database] host localhost Server hostname. Overrides the URI host. port 5432 Server port. Overrides the URI port. username — PostgreSQL username. Overrides the URI user. password — PostgreSQL password. Overrides the URI password. database — Database to back up. If omitted, all databases are backed up via pg_dumpall. Overrides the URI path. When set, a globals dump (/globals.sql) is also produced automatically. compress false Enable pg_dump compression. Disabled by default so Plakar\u0026rsquo;s own compression and deduplication are not degraded. schema_only false Dump only the schema (no data). 
Mutually exclusive with data_only. data_only false Dump only the data (no schema). Mutually exclusive with schema_only. pg_bin_dir — Directory containing the PostgreSQL client binaries. When omitted, binaries are resolved via $PATH. Useful when multiple PostgreSQL versions are installed. ssl_mode prefer SSL mode: disable, allow, prefer, require, verify-ca, or verify-full. ssl_cert — Path to the client SSL certificate file (PEM). ssl_key — Path to the client SSL private key file (PEM). ssl_root_cert — Path to the root CA certificate used to verify the server (PEM). Destination options # Parameter Default Description location — Connection URI: postgres://[user[:password]@]host[:port][/database] host localhost Server hostname. Overrides the URI host. port 5432 Server port. Overrides the URI port. username — PostgreSQL username. Overrides the URI user. password — PostgreSQL password. Overrides the URI password. database — Target database name. If omitted, inferred from the dump filename (e.g. myapp.dump → myapp). create_db false When true, passes -C to pg_restore to create the database from the archive metadata. The -d parameter then names only the initial connection database (defaults to postgres). restore_globals false When true, feeds /globals.sql to psql before restoring the database dump. Useful when restoring to a server where source roles do not exist. Not needed for pg_dumpall restores (all.sql). no_owner false Pass --no-owner to pg_restore, skipping ALTER OWNER statements. Useful when roles from the source server do not exist on the target. schema_only false Restore only the schema (no data). Mutually exclusive with data_only. Not applicable to pg_dumpall restores. data_only false Restore only the data (no schema). Mutually exclusive with schema_only. Not applicable to pg_dumpall restores. exit_on_error false Stop on the first restore error. Applies to both pg_restore and psql. pg_bin_dir — Directory containing the PostgreSQL client binaries. 
When omitted, binaries are resolved via $PATH. ssl_mode prefer SSL mode: disable, allow, prefer, require, verify-ca, or verify-full. ssl_cert — Path to the client SSL certificate file (PEM). ssl_key — Path to the client SSL private key file (PEM). ssl_root_cert — Path to the root CA certificate used to verify the server (PEM). Snapshot manifest # Every snapshot includes a /manifest.json record written before the dump data. It captures the cluster state at the time of backup.\nField Description cluster_config Key server settings: data_directory, timezone, max_connections, wal_level, server_encoding, data_checksums, block_size, wal_block_size, shared_preload_libraries, lc_collate, lc_ctype, archive_mode, archive_command_set (boolean only — the command itself is not stored). roles All PostgreSQL roles with their attributes and role memberships. tablespaces All tablespaces with name, owner, filesystem location, and storage options. databases One entry per database: name, owner, encoding, collation, extensions, schemas, and a relations array covering tables, views, materialized views, sequences, and partitioned tables. Row counts in the manifest are estimates from pg_class and pg_stat_user_tables, not exact values. Metadata collection is best-effort: if a query fails, the affected field is omitted and the backup continues.\nConsiderations # Compression # Do not enable compress=true unless necessary. Plakar deduplicates and compresses data automatically. Pre-compressed dumps produce an incompressible stream that reduces deduplication effectiveness across snapshots.\nKloset store location # The examples above use /var/backups as the Kloset store. Any supported store backend can be used instead. 
See Create a Kloset store for details.\nSee also # PostgreSQL integration on GitHub ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/guides/postgres/pgdump/","section":"Docs","summary":"Back up PostgreSQL databases using the Plakar PostgreSQL integration and restore them.","title":"Logical backups with pg_dump","type":"docs"},{"content":" Logical backups with pg_dump # The Plakar PostgreSQL integration uses pg_dump and pg_dumpall to produce logical backups which are portable, version-independent SQL representations of your databases. Logical backups can be restored to a different PostgreSQL major version and allow selective restore of individual databases, namespaces, or tables.\nFor a deeper understanding of SQL dumps and PostgreSQL backup strategies, refer to the official PostgreSQL documentation on SQL dumps.\nRequirements # A running PostgreSQL server. A PostgreSQL superuser, or a user with sufficient privileges to run pg_dump and pg_dumpall. The following client tools available in $PATH: pg_dump, pg_dumpall, pg_restore, psql. Managed services (RDS, Cloud SQL, etc.) On managed services where the administrative user is a restricted superuser and cannot read pg_authid, the integration automatically falls back to --no-role-passwords. The dump is otherwise complete, but restored roles will have no password set.\nInstall the package # $ plakar pkg add postgresql What gets stored in a snapshot # Single database backup produces two records:\n/globals.sql — roles and tablespaces from pg_dumpall --globals-only. /\u0026lt;dbname\u0026gt;.dump — the database in pg_dump custom format (-Fc). Full cluster backup produces one record:\n/all.sql — all databases, roles, and tablespaces from pg_dumpall. Both backup types also include a /manifest.json record written before the dump data, containing cluster-level metadata: server version, roles, tablespaces, databases, schemas, and relation details. 
See Snapshot manifest below.\nBack up a single database # $ plakar source add mypg postgres://postgres:secret@db.example.com/myapp $ plakar at /var/backups backup @mypg Back up all databases # Omit the database name to back up the entire cluster with pg_dumpall:\n$ plakar source add mypg postgres://postgres:secret@db.example.com/ $ plakar at /var/backups backup @mypg Restore a single database # The target database must already exist:\n$ plakar destination add mypgdst postgres://postgres:secret@db.example.com/myapp $ plakar at /var/backups restore -to @mypgdst \u0026lt;snapshot_id\u0026gt; To have Plakar create the database automatically, set create_db=true:\n$ plakar destination add mypgdst postgres://postgres:secret@db.example.com/myapp \\ create_db=true $ plakar at /var/backups restore -to @mypgdst \u0026lt;snapshot_id\u0026gt; Restore all databases # $ plakar destination add mypgdst postgres://postgres:secret@db.example.com/ $ plakar at /var/backups restore -to @mypgdst \u0026lt;snapshot_id\u0026gt; List snapshots # $ plakar at /var/backups ls Source options # Parameter Default Description location — Connection URI: postgres://[user[:password]@]host[:port][/database] host localhost Server hostname. Overrides the URI host. port 5432 Server port. Overrides the URI port. username — PostgreSQL username. Overrides the URI user. password — PostgreSQL password. Overrides the URI password. database — Database to back up. If omitted, all databases are backed up via pg_dumpall. Overrides the URI path. When set, a globals dump (/globals.sql) is also produced automatically. compress false Enable pg_dump compression. Disabled by default so Plakar\u0026rsquo;s own compression and deduplication are not degraded. schema_only false Dump only the schema (no data). Mutually exclusive with data_only. data_only false Dump only the data (no schema). Mutually exclusive with schema_only. pg_bin_dir — Directory containing the PostgreSQL client binaries. 
When omitted, binaries are resolved via $PATH. Useful when multiple PostgreSQL versions are installed. ssl_mode prefer SSL mode: disable, allow, prefer, require, verify-ca, or verify-full. ssl_cert — Path to the client SSL certificate file (PEM). ssl_key — Path to the client SSL private key file (PEM). ssl_root_cert — Path to the root CA certificate used to verify the server (PEM). Destination options # Parameter Default Description location — Connection URI: postgres://[user[:password]@]host[:port][/database] host localhost Server hostname. Overrides the URI host. port 5432 Server port. Overrides the URI port. username — PostgreSQL username. Overrides the URI user. password — PostgreSQL password. Overrides the URI password. database — Target database name. If omitted, inferred from the dump filename (e.g. myapp.dump → myapp). create_db false When true, passes -C to pg_restore to create the database from the archive metadata. The -d parameter then names only the initial connection database (defaults to postgres). restore_globals false When true, feeds /globals.sql to psql before restoring the database dump. Useful when restoring to a server where source roles do not exist. Not needed for pg_dumpall restores (all.sql). no_owner false Pass --no-owner to pg_restore, skipping ALTER OWNER statements. Useful when roles from the source server do not exist on the target. schema_only false Restore only the schema (no data). Mutually exclusive with data_only. Not applicable to pg_dumpall restores. data_only false Restore only the data (no schema). Mutually exclusive with schema_only. Not applicable to pg_dumpall restores. exit_on_error false Stop on the first restore error. Applies to both pg_restore and psql. pg_bin_dir — Directory containing the PostgreSQL client binaries. When omitted, binaries are resolved via $PATH. ssl_mode prefer SSL mode: disable, allow, prefer, require, verify-ca, or verify-full. ssl_cert — Path to the client SSL certificate file (PEM). 
ssl_key — Path to the client SSL private key file (PEM). ssl_root_cert — Path to the root CA certificate used to verify the server (PEM). Snapshot manifest # Every snapshot includes a /manifest.json record written before the dump data. It captures the cluster state at the time of backup.\nField Description cluster_config Key server settings: data_directory, timezone, max_connections, wal_level, server_encoding, data_checksums, block_size, wal_block_size, shared_preload_libraries, lc_collate, lc_ctype, archive_mode, archive_command_set (boolean only — the command itself is not stored). roles All PostgreSQL roles with their attributes and role memberships. tablespaces All tablespaces with name, owner, filesystem location, and storage options. databases One entry per database: name, owner, encoding, collation, extensions, schemas, and a relations array covering tables, views, materialized views, sequences, and partitioned tables. Row counts in the manifest are estimates from pg_class and pg_stat_user_tables, not exact values. Metadata collection is best-effort: if a query fails, the affected field is omitted and the backup continues.\nConsiderations # Compression # Do not enable compress=true unless necessary. Plakar deduplicates and compresses data automatically. Pre-compressed dumps produce an incompressible stream that reduces deduplication effectiveness across snapshots.\nKloset store location # The examples above use /var/backups as the Kloset store. Any supported store backend can be used instead. 
See Create a Kloset store for details.\nSee also # PostgreSQL integration on GitHub ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/postgres/pgdump/","section":"Docs","summary":"Back up PostgreSQL databases using the Plakar PostgreSQL integration and restore them.","title":"Logical backups with pg_dump","type":"docs"},{"content":" Logical backups with SQL dumps # SQL dumps consist of a file containing SQL commands that can be fed back to a PostgreSQL server to recreate a database in the exact state it was in at the time the dump was taken.\nFor a deeper understanding of SQL dumps and PostgreSQL backup strategies, we recommend reading the official PostgreSQL documentation on SQL dumps.\nPrerequisites # Running PostgreSQL server Environment variables: PGHOST, PGPORT, PGUSER, PGPASSWORD Back Up Single Database # $ export PGUSER=xxx $ export PGPORT=5432 $ export PGHOST=xxx $ export PGPASSWORD=xxx $ pg_dump \u0026lt;dbname\u0026gt; | plakar at /var/backups backup stdin:dump.sql Restore Single Database # $ export PGUSER=xxx $ export PGPORT=5432 $ export PGHOST=xxx $ export PGPASSWORD=xxx $ plakar at /var/backups cat \u0026lt;SNAPSHOT_ID\u0026gt;:dump.sql | psql -X \u0026lt;dbname\u0026gt; List snapshots:\n$ plakar at /var/backups ls Back Up Entire Cluster # Use pg_dumpall to include all databases, roles, and tablespaces:\n$ export PGUSER=xxx $ export PGPORT=5432 $ export PGHOST=xxx $ export PGPASSWORD=xxx $ pg_dumpall | plakar at /var/backups backup stdin:dump.sql Restore Entire Cluster # $ export PGUSER=xxx $ export PGPORT=5432 $ export PGHOST=xxx $ export PGPASSWORD=xxx $ plakar at /var/backups cat \u0026lt;SNAPSHOT_ID\u0026gt;:dump.sql | psql -X Considerations # Compression # Do not compress dumps manually. Plakar automatically deduplicates and compresses data. Pre-compressed dumps prevent effective deduplication. ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/postgres/pgdump/","section":"Docs","summary":"Back up 
PostgreSQL databases using pg_dump and restore from these backups.","title":"Logical backups with SQL dumps","type":"docs"},{"content":" Plakar Ptar # plakar ptar creates portable .ptar archives from data sources.\nSyntax # $ plakar ptar [options] -o output.ptar [sources] Required Arguments # Argument Description -o path Output file path for the .ptar archive sources At least one: -k location for Kloset Store or filesystem path Options # Option Type Default Description -k location string - Include a Kloset Store (path or alias) -plaintext flag false Disable encryption -overwrite flag false Allow overwriting existing files Source Types # Source Type Syntax Example Filesystem path /path or ./path /home/user/Documents Kloset Store (path) -k /path -k $HOME/backups Kloset Store (alias) -k @alias -k @s3-backups Remote protocols Plugin-dependent sftp://, s3://, ipfs:// Examples # # Single directory $ plakar ptar -o documents.ptar /home/user/Documents # Multiple paths $ plakar ptar -o important.ptar /home/user/Documents /home/user/Pictures # From Kloset Store $ plakar ptar -o backup.ptar -k $HOME/backups # From multiple stores $ plakar ptar -o combined.ptar -k $HOME/backups -k @s3-backups # Mixed sources $ plakar ptar -o comprehensive.ptar -k $HOME/backups /home/user/NewDocs # Plaintext archive $ plakar ptar -plaintext -o unencrypted.ptar -k $HOME/backups # Overwrite existing $ plakar ptar -overwrite -o existing.ptar -k $HOME/backups Environment Variables # Variable Description PLAKAR_PASSPHRASE Passphrase for archive encryption (avoids interactive prompt) Exit Codes # Code Meaning 0 Success 1 Error (file exists without -overwrite, invalid source, etc.) 
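The exit-code contract above can drive a pre-flight check in wrapper scripts. Here is a minimal sketch in plain shell that mirrors the documented -overwrite behavior before plakar ptar is ever invoked; the preflight helper and the file names are illustrative, not part of plakar:

```shell
# Pre-flight guard mirroring the documented behavior: refuse to clobber
# an existing archive unless overwriting was explicitly requested
# (plakar ptar itself exits 1 when the file exists without -overwrite).
preflight() {
  out=$1
  allow=${2:-}
  if [ -e "$out" ] && [ "$allow" != "-overwrite" ]; then
    echo "error: $out already exists (pass -overwrite to replace it)" >&2
    return 1
  fi
}

touch existing.ptar
preflight existing.ptar || echo "refused: existing.ptar"
preflight existing.ptar -overwrite && echo "ok: overwrite allowed"
preflight fresh.ptar && echo "ok: fresh path"
```

In practice you would run the real command only after the guard succeeds, e.g. preflight "$out" "$flag" \u0026amp;\u0026amp; plakar ptar $flag -o "$out" -k @s3-backups.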
plakar at \u0026hellip; (Ptar Operations) # Access Ptar files as read-only Kloset Stores.\nSyntax # $ plakar at archive.ptar \u0026lt;command\u0026gt; Supported Commands # Command Description ls [snapshot-id] List snapshots or files in a snapshot check Verify cryptographic integrity restore -to destination [snapshot-id] Restore snapshot contents info Display archive metadata plakar at \u0026hellip; ls # List snapshots or snapshot contents.\nSyntax # $ plakar at archive.ptar ls [snapshot-id] Arguments # Argument Required Description snapshot-id No If omitted, lists all snapshots; if provided, lists files in that snapshot Examples # # List all snapshots $ plakar at backup.ptar ls # List files in specific snapshot $ plakar at backup.ptar ls df42124a Output Format # Snapshots:\n\u0026lt;timestamp\u0026gt; \u0026lt;snapshot-id\u0026gt; \u0026lt;size\u0026gt; \u0026lt;duration\u0026gt; \u0026lt;path\u0026gt; Files:\n\u0026lt;timestamp\u0026gt; \u0026lt;permissions\u0026gt; \u0026lt;user\u0026gt; \u0026lt;group\u0026gt; \u0026lt;size\u0026gt; \u0026lt;filename\u0026gt; plakar at \u0026hellip; check # Verify archive integrity.\nSyntax # $ plakar at archive.ptar check Examples # $ plakar at backup.ptar check Output # info: \u0026lt;snapshot-id\u0026gt;: ✓ \u0026lt;path\u0026gt; plakar at \u0026hellip; restore # Restore snapshot contents to filesystem or Kloset Store.\nSyntax # $ plakar at archive.ptar restore -to destination [snapshot-id] Arguments # Argument Required Description -to path Yes Destination directory or Kloset Store alias (e.g., @alias) snapshot-id No Snapshot to restore; defaults to first snapshot if omitted Examples # # Restore to local directory $ plakar at backup.ptar restore -to $HOME/restored-backups df42124a # Restore to configured store $ plakar at backup.ptar restore -to @new-location df42124a # Restore first snapshot (no ID specified) $ plakar at backup.ptar restore -to $HOME/restored plakar at \u0026hellip; info # Display archive metadata.\nSyntax # 
$ plakar at archive.ptar info Examples # $ plakar at backup.ptar info Passphrase Handling # Interactive Mode # If PLAKAR_PASSPHRASE is not set, prompts appear:\nCreating archive:\nrepository passphrase: repository passphrase (confirm): Accessing archive:\nrepository passphrase: Different source/destination:\nsource repository passphrase: repository passphrase: repository passphrase (confirm): Non-interactive Mode # Set PLAKAR_PASSPHRASE environment variable to avoid prompts:\n$ export PLAKAR_PASSPHRASE=\u0026#34;your-secure-passphrase\u0026#34; $ plakar ptar -o backup.ptar -k $HOME/backups File Format Properties # Property Value Read-only Yes (archives cannot be modified after creation) Self-contained Yes (includes all metadata and data) Portable Yes (single file can be moved/copied freely) Encrypted by default Yes (unless -plaintext specified) Tamper-evident Yes (cryptographic verification via check) Further Reading # For a deeper dive into the philosophy and technical design of the format, check out the following posts on the Plakar blog:\nIt doesn\u0026rsquo;t make sense to wrap modern data in a 1979 format, introducing .ptar Technical deep dive into .ptar: replacing .tgz for petabyte-scale S3 archives Kloset Store \u0026amp; Ptar design documentation ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/references/ptar/","section":"Docs","summary":"Command reference for creating and accessing Ptar archives: syntax, options, and examples for plakar ptar and related commands.","title":"Plakar Ptar","type":"docs"},{"content":" Plakar Ptar # plakar ptar creates portable .ptar archives from data sources.\nSyntax # $ plakar ptar [options] -o output.ptar [sources] Required Arguments # Argument Description -o path Output file path for the .ptar archive sources At least one: -k location for Kloset Store or filesystem path Options # Option Type Default Description -k location string - Include a Kloset Store (path or alias) -plaintext flag false Disable 
encryption -overwrite flag false Allow overwriting existing files Source Types # Source Type Syntax Example Filesystem path /path or ./path /home/user/Documents Kloset Store (path) -k /path -k $HOME/backups Kloset Store (alias) -k @alias -k @s3-backups Remote protocols Plugin-dependent sftp://, s3://, ipfs:// Examples # # Single directory $ plakar ptar -o documents.ptar /home/user/Documents # Multiple paths $ plakar ptar -o important.ptar /home/user/Documents /home/user/Pictures # From Kloset Store $ plakar ptar -o backup.ptar -k $HOME/backups # From multiple stores $ plakar ptar -o combined.ptar -k $HOME/backups -k @s3-backups # Mixed sources $ plakar ptar -o comprehensive.ptar -k $HOME/backups /home/user/NewDocs # Plaintext archive $ plakar ptar -plaintext -o unencrypted.ptar -k $HOME/backups # Overwrite existing $ plakar ptar -overwrite -o existing.ptar -k $HOME/backups Environment Variables # Variable Description PLAKAR_PASSPHRASE Passphrase for archive encryption (avoids interactive prompt) Exit Codes # Code Meaning 0 Success 1 Error (file exists without -overwrite, invalid source, etc.) 
plakar at \u0026hellip; (Ptar Operations) # Access Ptar files as read-only Kloset Stores.\nSyntax # $ plakar at archive.ptar \u0026lt;command\u0026gt; Supported Commands # Command Description ls [snapshot-id] List snapshots or files in a snapshot check Verify cryptographic integrity restore -to destination [snapshot-id] Restore snapshot contents info Display archive metadata plakar at \u0026hellip; ls # List snapshots or snapshot contents.\nSyntax # $ plakar at archive.ptar ls [snapshot-id] Arguments # Argument Required Description snapshot-id No If omitted, lists all snapshots; if provided, lists files in that snapshot Examples # # List all snapshots $ plakar at backup.ptar ls # List files in specific snapshot $ plakar at backup.ptar ls df42124a Output Format # Snapshots:\n\u0026lt;timestamp\u0026gt; \u0026lt;snapshot-id\u0026gt; \u0026lt;size\u0026gt; \u0026lt;duration\u0026gt; \u0026lt;path\u0026gt; Files:\n\u0026lt;timestamp\u0026gt; \u0026lt;permissions\u0026gt; \u0026lt;user\u0026gt; \u0026lt;group\u0026gt; \u0026lt;size\u0026gt; \u0026lt;filename\u0026gt; plakar at \u0026hellip; check # Verify archive integrity.\nSyntax # $ plakar at archive.ptar check Examples # $ plakar at backup.ptar check Output # info: \u0026lt;snapshot-id\u0026gt;: ✓ \u0026lt;path\u0026gt; plakar at \u0026hellip; restore # Restore snapshot contents to filesystem or Kloset Store.\nSyntax # $ plakar at archive.ptar restore -to destination [snapshot-id] Arguments # Argument Required Description -to path Yes Destination directory or Kloset Store alias (e.g., @alias) snapshot-id No Snapshot to restore; defaults to first snapshot if omitted Examples # # Restore to local directory $ plakar at backup.ptar restore -to $HOME/restored-backups df42124a # Restore to configured store $ plakar at backup.ptar restore -to @new-location df42124a # Restore first snapshot (no ID specified) $ plakar at backup.ptar restore -to $HOME/restored plakar at \u0026hellip; info # Display archive metadata.\nSyntax # 
$ plakar at archive.ptar info Examples # $ plakar at backup.ptar info Passphrase Handling # Interactive Mode # If PLAKAR_PASSPHRASE is not set, prompts appear:\nCreating archive:\nrepository passphrase: repository passphrase (confirm): Accessing archive:\nrepository passphrase: Different source/destination:\nsource repository passphrase: repository passphrase: repository passphrase (confirm): Non-interactive Mode # Set PLAKAR_PASSPHRASE environment variable to avoid prompts:\n$ export PLAKAR_PASSPHRASE=\u0026#34;your-secure-passphrase\u0026#34; $ plakar ptar -o backup.ptar -k $HOME/backups File Format Properties # Property Value Read-only Yes (archives cannot be modified after creation) Self-contained Yes (includes all metadata and data) Portable Yes (single file can be moved/copied freely) Encrypted by default Yes (unless -plaintext specified) Tamper-evident Yes (cryptographic verification via check) Further Reading # For a deeper dive into the philosophy and technical design of the format, check out the following posts on the Plakar blog:\nIt doesn\u0026rsquo;t make sense to wrap modern data in a 1979 format, introducing .ptar Technical deep dive into .ptar: replacing .tgz for petabyte-scale S3 archives Kloset Store \u0026amp; Ptar design documentation ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/ptar/","section":"Docs","summary":"Command reference for creating and accessing Ptar archives: syntax, options, and examples for plakar ptar and related commands.","title":"Plakar Ptar","type":"docs"},{"content":" Plakar Ptar # plakar ptar creates portable .ptar archives from data sources.\nSyntax # $ plakar ptar [options] -o output.ptar [sources] Required Arguments # Argument Description -o path Output file path for the .ptar archive sources At least one: -k location for Kloset Store or filesystem path Options # Option Type Default Description -k location string - Include a Kloset Store (path or alias) -plaintext flag false Disable 
encryption -overwrite flag false Allow overwriting existing files Source Types # Source Type Syntax Example Filesystem path /path or ./path /home/user/Documents Kloset Store (path) -k /path -k $HOME/backups Kloset Store (alias) -k @alias -k @s3-backups Remote protocols Plugin-dependent sftp://, s3://, ipfs:// Examples # # Single directory $ plakar ptar -o documents.ptar /home/user/Documents # Multiple paths $ plakar ptar -o important.ptar /home/user/Documents /home/user/Pictures # From Kloset Store $ plakar ptar -o backup.ptar -k $HOME/backups # From multiple stores $ plakar ptar -o combined.ptar -k $HOME/backups -k @s3-backups # Mixed sources $ plakar ptar -o comprehensive.ptar -k $HOME/backups /home/user/NewDocs # Plaintext archive $ plakar ptar -plaintext -o unencrypted.ptar -k $HOME/backups # Overwrite existing $ plakar ptar -overwrite -o existing.ptar -k $HOME/backups Environment Variables # Variable Description PLAKAR_PASSPHRASE Passphrase for archive encryption (avoids interactive prompt) Exit Codes # Code Meaning 0 Success 1 Error (file exists without -overwrite, invalid source, etc.) 
plakar at \u0026hellip; (Ptar Operations) # Access Ptar files as read-only Kloset Stores.\nSyntax # $ plakar at archive.ptar \u0026lt;command\u0026gt; Supported Commands # Command Description ls [snapshot-id] List snapshots or files in a snapshot check Verify cryptographic integrity restore -to destination [snapshot-id] Restore snapshot contents info Display archive metadata plakar at \u0026hellip; ls # List snapshots or snapshot contents.\nSyntax # $ plakar at archive.ptar ls [snapshot-id] Arguments # Argument Required Description snapshot-id No If omitted, lists all snapshots; if provided, lists files in that snapshot Examples # # List all snapshots $ plakar at backup.ptar ls # List files in specific snapshot $ plakar at backup.ptar ls df42124a Output Format # Snapshots:\n\u0026lt;timestamp\u0026gt; \u0026lt;snapshot-id\u0026gt; \u0026lt;size\u0026gt; \u0026lt;duration\u0026gt; \u0026lt;path\u0026gt; Files:\n\u0026lt;timestamp\u0026gt; \u0026lt;permissions\u0026gt; \u0026lt;user\u0026gt; \u0026lt;group\u0026gt; \u0026lt;size\u0026gt; \u0026lt;filename\u0026gt; plakar at \u0026hellip; check # Verify archive integrity.\nSyntax # $ plakar at archive.ptar check Examples # $ plakar at backup.ptar check Output # info: \u0026lt;snapshot-id\u0026gt;: ✓ \u0026lt;path\u0026gt; plakar at \u0026hellip; restore # Restore snapshot contents to filesystem or Kloset Store.\nSyntax # $ plakar at archive.ptar restore -to destination [snapshot-id] Arguments # Argument Required Description -to path Yes Destination directory or Kloset Store alias (e.g., @alias) snapshot-id No Snapshot to restore; defaults to first snapshot if omitted Examples # # Restore to local directory $ plakar at backup.ptar restore -to $HOME/restored-backups df42124a # Restore to configured store $ plakar at backup.ptar restore -to @new-location df42124a # Restore first snapshot (no ID specified) $ plakar at backup.ptar restore -to $HOME/restored plakar at \u0026hellip; info # Display archive metadata.\nSyntax # 
$ plakar at archive.ptar info Examples # $ plakar at backup.ptar info Passphrase Handling # Interactive Mode # If PLAKAR_PASSPHRASE is not set, prompts appear:\nCreating archive:\nrepository passphrase: repository passphrase (confirm): Accessing archive:\nrepository passphrase: Different source/destination:\nsource repository passphrase: repository passphrase: repository passphrase (confirm): Non-interactive Mode # Set PLAKAR_PASSPHRASE environment variable to avoid prompts:\n$ export PLAKAR_PASSPHRASE=\u0026#34;your-secure-passphrase\u0026#34; $ plakar ptar -o backup.ptar -k $HOME/backups File Format Properties # Property Value Read-only Yes (archives cannot be modified after creation) Self-contained Yes (includes all metadata and data) Portable Yes (single file can be moved/copied freely) Encrypted by default Yes (unless -plaintext specified) Tamper-evident Yes (cryptographic verification via check) Further Reading # For a deeper dive into the philosophy and technical design of the format, check out the following posts on the Plakar blog:\nIt doesn\u0026rsquo;t make sense to wrap modern data in a 1979 format, introducing .ptar Technical deep dive into .ptar: replacing .tgz for petabyte-scale S3 archives Kloset Store \u0026amp; Ptar design documentation ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/ptar/","section":"Docs","summary":"Command reference for creating and accessing Ptar archives: syntax, options, and examples for plakar ptar and related commands.","title":"Plakar Ptar","type":"docs"},{"content":" S3 # The S3 integration enables backup and restoration of S3 buckets through S3-compatible APIs. All bucket contents including objects, metadata, and folder hierarchies are captured and stored in a Kloset store with encryption and deduplication.\nThe S3 integration provides three connectors:\nConnector type Description Source connector Back up S3 buckets into a Kloset store. 
Storage connector Use S3-compatible storage as a Kloset store backend. Destination connector Restore bucket contents from a Kloset store back to S3. Installation # The S3 package can be installed using pre-built binaries or compiled from source.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the S3 package:\n$ plakar pkg add s3 Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build s3 A package archive will be created in the current directory (e.g., s3_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./s3_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nConfiguration # Addressing styles # S3-compatible services use one of two addressing styles for buckets.\nPath-style (default) — the bucket name is part of the URL path:\ns3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; Virtual-hosted-style — the bucket name is part of the hostname. Required by some services that do not support path-style access (such as AWS S3 in certain regions):\ns3://\u0026lt;BUCKET_NAME\u0026gt;.\u0026lt;S3_ENDPOINT\u0026gt; Set virtual_host=true when using virtual-hosted-style addressing.\nConfiguration options # These options apply to all three connectors (source, storage, destination).\nOption Required Description location Yes S3 endpoint and bucket. See Addressing styles above. access_key Yes S3 Access Key ID. secret_access_key Yes S3 Secret Access Key. passphrase No Encryption passphrase. If not set, Plakar will prompt interactively. Source connector only. use_tls No Enable TLS. 
Recommended for all internet-facing connections. virtual_host No Use virtual-hosted-style addressing. Defaults to false. tls_insecure_no_verify No Skip TLS certificate verification. Defaults to false. See warning below. TLS Certificate Verification Setting tls_insecure_no_verify=true disables TLS certificate verification, leaving your connection open to man-in-the-middle attacks. Only use this in controlled environments with self-signed certificates on trusted networks. Never use it with AWS S3, public cloud storage, or any production data.\nSource connector # The source connector retrieves objects from S3 buckets and stores them in a Kloset store with encryption and deduplication.\nflowchart LR subgraph Source[\"S3 Bucket\"] FS[\"Objects\"] end subgraph Plakar[\"Plakar\"] Connector[\"Retrieve objects viaS3 API\"] Transform[\"Encrypt \u0026 deduplicate\"] Connector --\u003e Transform end Source --\u003e Connector Store[\"Kloset Store\"] Transform --\u003e Store Register the source and run a backup:\n$ plakar source add my-s3-bucket \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true $ plakar at /var/backups backup \u0026#34;@my-s3-bucket\u0026#34; Storage connector # The storage connector uses S3-compatible storage as the backend for a Kloset store. 
All Plakar data—snapshots, chunks, metadata—is stored as S3 objects.\nflowchart LR subgraph Sources[\"Any Source\"] FS[\"Data\"] end subgraph Plakar[\"Plakar\"] Transform[\"Encrypt \u0026 deduplicate\"] Connector[\"Store viaS3 API\"] Transform --\u003e Connector end Sources --\u003e Transform subgraph Storage[\"S3 Storage\"] Store[\"Kloset Store\"] end Connector --\u003e Store Register the store, initialize it, and run a backup:\n$ plakar store add my-s3-store \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true $ plakar at \u0026#34;@my-s3-store\u0026#34; create $ plakar at \u0026#34;@my-s3-store\u0026#34; backup /var/www Destination connector # Restores objects from a Kloset store back to an S3 bucket.\nflowchart LR Store[\"Kloset Store\"] subgraph Plakar[\"Plakar\"] Transform[\"Decrypt \u0026 reconstruct\"] Connector[\"Restore viaS3 API\"] Transform --\u003e Connector end Store --\u003e Transform subgraph Destination[\"S3 Bucket\"] FS[\"Objects\"] end Connector --\u003e Destination Register the destination and restore a snapshot:\n$ plakar destination add my-s3-restore \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true $ plakar at /var/backups restore -to \u0026#34;@my-s3-restore\u0026#34; \u0026lt;snapshot_id\u0026gt; ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/integrations/s3/","section":"Docs","summary":"Back up and restore S3 buckets with Plakar.","title":"S3","type":"docs"},{"content":" S3 # The S3 integration enables backup and restoration of S3 buckets through S3-compatible APIs. 
All bucket contents including objects, metadata, and folder hierarchies are captured and stored in a Kloset store with encryption and deduplication.\nThe S3 integration provides three connectors:\nConnector type Description Source connector Back up S3 buckets into a Kloset store. Storage connector Use S3-compatible storage as a Kloset store backend. Destination connector Restore bucket contents from a Kloset store back to S3. Installation # The S3 package can be installed using pre-built binaries or compiled from source.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the S3 package:\n$ plakar pkg add s3 Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build s3 A package archive will be created in the current directory (e.g., s3_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./s3_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nConfiguration # Configuration options # These options apply to all three connectors (source, storage, destination).\nOption Required Description location Yes S3 endpoint and bucket including region (format: s3://s3.region.amazonaws.com/bucket) access_key Yes S3 Access Key ID. secret_access_key Yes S3 Secret Access Key. passphrase No Encryption passphrase. If not set, Plakar will prompt interactively. Source connector only. use_tls No Enable TLS. Recommended for all internet-facing connections. 
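The location format documented above can be assembled from its parts before registering a connector. A minimal shell sketch; the region and bucket names are illustrative placeholders, not real resources:

```shell
# Build the path-style location documented above:
#   s3://s3.<region>.amazonaws.com/<bucket>
# Both values below are placeholders.
S3_REGION="eu-west-1"
S3_BUCKET="my-backups"
S3_LOCATION="s3://s3.${S3_REGION}.amazonaws.com/${S3_BUCKET}"
echo "$S3_LOCATION"
```

The resulting value is what you would pass as location= to plakar source add, store add, or destination add.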
Source connector # The source connector retrieves objects from S3 buckets and stores them in a Kloset store with encryption and deduplication.\nflowchart LR subgraph Source[\"S3 Bucket\"] FS[\"Objects\"] end subgraph Plakar[\"Plakar\"] Connector[\"Retrieve objects viaS3 API\"] Transform[\"Encrypt \u0026 deduplicate\"] Connector --\u003e Transform end Source --\u003e Connector Store[\"Kloset Store\"] Transform --\u003e Store Register the source and run a backup:\n$ plakar source add my-s3-bucket \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true $ plakar at /var/backups backup \u0026#34;@my-s3-bucket\u0026#34; Storage connector # The storage connector uses S3-compatible storage as the backend for a Kloset store. All Plakar data—snapshots, chunks, metadata—is stored as S3 objects.\nflowchart LR subgraph Sources[\"Any Source\"] FS[\"Data\"] end subgraph Plakar[\"Plakar\"] Transform[\"Encrypt \u0026 deduplicate\"] Connector[\"Store viaS3 API\"] Transform --\u003e Connector end Sources --\u003e Transform subgraph Storage[\"S3 Storage\"] Store[\"Kloset Store\"] end Connector --\u003e Store Register the store, initialize it, and run a backup:\n$ plakar store add my-s3-store \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true $ plakar at \u0026#34;@my-s3-store\u0026#34; create $ plakar at \u0026#34;@my-s3-store\u0026#34; backup /var/www Destination connector # Restores objects from a Kloset store back to an S3 bucket.\nflowchart LR Store[\"Kloset Store\"] subgraph Plakar[\"Plakar\"] Transform[\"Decrypt \u0026 reconstruct\"] Connector[\"Restore viaS3 API\"] Transform --\u003e Connector end Store --\u003e Transform subgraph Destination[\"S3 Bucket\"] FS[\"Objects\"] end 
Connector --\u003e Destination Register the destination and restore a snapshot:\n$ plakar destination add my-s3-restore \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true $ plakar at /var/backups restore -to \u0026#34;@my-s3-restore\u0026#34; \u0026lt;snapshot_id\u0026gt; ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/integrations/s3/","section":"Docs","summary":"Back up and restore S3 buckets with Plakar.","title":"S3","type":"docs"},{"content":" S3 # The S3 integration enables backup and restoration of S3 buckets through S3-compatible APIs. All bucket contents including objects, metadata, and folder hierarchies are captured and stored in a Kloset store with encryption and deduplication.\nThe S3 integration provides three connectors:\nConnector type Description Source connector Back up S3 buckets into a Kloset store. Storage connector Use S3-compatible storage as a Kloset store backend. Destination connector Restore bucket contents from a Kloset store back to S3. Installation # The S3 package can be installed using pre-built binaries or compiled from source.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the S3 package:\n$ plakar pkg add s3 Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build s3 A package archive will be created in the current directory (e.g., s3_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./s3_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nConfiguration # Configuration options # These options apply to all three connectors (source, storage, destination).\nOption Required Description location Yes S3 endpoint and bucket including region (format: s3://s3.region.amazonaws.com/bucket) access_key Yes S3 Access Key ID. secret_access_key Yes S3 Secret Access Key. passphrase No Encryption passphrase. If not set, Plakar will prompt interactively. Source connector only. use_tls No Enable TLS. Recommended for all internet-facing connections. Source connector # The source connector retrieves objects from S3 buckets and stores them in a Kloset store with encryption and deduplication.\nflowchart LR subgraph Source[\"S3 Bucket\"] FS[\"Objects\"] end subgraph Plakar[\"Plakar\"] Connector[\"Retrieve objects viaS3 API\"] Transform[\"Encrypt \u0026 deduplicate\"] Connector --\u003e Transform end Source --\u003e Connector Store[\"Kloset Store\"] Transform --\u003e Store Register the source and run a backup:\n$ plakar source add my-s3-bucket \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true $ plakar at /var/backups backup \u0026#34;@my-s3-bucket\u0026#34; Storage connector # The storage connector uses S3-compatible storage as the backend for a Kloset store. 
All Plakar data—snapshots, chunks, metadata—is stored as S3 objects.\nflowchart LR subgraph Sources[\"Any Source\"] FS[\"Data\"] end subgraph Plakar[\"Plakar\"] Transform[\"Encrypt \u0026 deduplicate\"] Connector[\"Store viaS3 API\"] Transform --\u003e Connector end Sources --\u003e Transform subgraph Storage[\"S3 Storage\"] Store[\"Kloset Store\"] end Connector --\u003e Store Register the store, initialize it, and run a backup:\n$ plakar store add my-s3-store \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true $ plakar at \u0026#34;@my-s3-store\u0026#34; create $ plakar at \u0026#34;@my-s3-store\u0026#34; backup /var/www Destination connector # Restores objects from a Kloset store back to an S3 bucket.\nflowchart LR Store[\"Kloset Store\"] subgraph Plakar[\"Plakar\"] Transform[\"Decrypt \u0026 reconstruct\"] Connector[\"Restore viaS3 API\"] Transform --\u003e Connector end Store --\u003e Transform subgraph Destination[\"S3 Bucket\"] FS[\"Objects\"] end Connector --\u003e Destination Register the destination and restore a snapshot:\n$ plakar destination add my-s3-restore \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true $ plakar at /var/backups restore -to \u0026#34;@my-s3-restore\u0026#34; \u0026lt;snapshot_id\u0026gt; ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/integrations/s3/","section":"Docs","summary":"Back up and restore S3 buckets with Plakar.","title":"S3","type":"docs"},{"content":" S3 # The S3 integration enables backup and restoration of S3 buckets through S3-compatible APIs. 
All bucket contents, including objects, metadata, and folder hierarchies, are captured and stored in a Kloset store with encryption and deduplication.\nThe S3 integration provides three connectors:\nConnector type Description Source connector Back up S3 buckets into a Kloset store. Storage connector Use S3-compatible storage as a Kloset store backend. Destination connector Restore bucket contents from a Kloset store back to S3. Installation # The S3 package can be installed using pre-built binaries or compiled from source.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the S3 package:\n$ plakar pkg add s3 Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build s3 A package archive will be created in the current directory (e.g., s3_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./s3_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nConfiguration # Addressing styles # S3-compatible services use one of two addressing styles for buckets.\nPath-style (default) — the bucket name is part of the URL path:\ns3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; Virtual-hosted-style — the bucket name is part of the hostname. 
Required by some services that do not support path-style access (such as AWS S3 in certain regions):\ns3://\u0026lt;BUCKET_NAME\u0026gt;.\u0026lt;S3_ENDPOINT\u0026gt; Set virtual_host=true when using virtual-hosted-style addressing.\nConfiguration options # These options apply to all three connectors (source, storage, destination).\nOption Required Description location Yes S3 endpoint and bucket. See Addressing styles above. access_key Yes S3 Access Key ID. secret_access_key Yes S3 Secret Access Key. passphrase No Encryption passphrase. If not set, Plakar will prompt interactively. Source connector only. use_tls No Enable TLS. Recommended for all internet-facing connections. virtual_host No Use virtual-hosted-style addressing. Defaults to false. tls_insecure_no_verify No Skip TLS certificate verification. Defaults to false. See warning below. TLS Certificate Verification Setting tls_insecure_no_verify=true disables TLS certificate verification, leaving your connection open to man-in-the-middle attacks. Only use this in controlled environments with self-signed certificates on trusted networks. 
Never use it with AWS S3, public cloud storage, or any production data.\nSource connector # The source connector retrieves objects from S3 buckets and stores them in a Kloset store with encryption and deduplication.\nflowchart LR subgraph Source[\"S3 Bucket\"] FS[\"Objects\"] end subgraph Plakar[\"Plakar\"] Connector[\"Retrieve objects viaS3 API\"] Transform[\"Encrypt \u0026 deduplicate\"] Connector --\u003e Transform end Source --\u003e Connector Store[\"Kloset Store\"] Transform --\u003e Store Register the source and run a backup:\n$ plakar source add my-s3-bucket \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true $ plakar at /var/backups backup \u0026#34;@my-s3-bucket\u0026#34; Storage connector # The storage connector uses S3-compatible storage as the backend for a Kloset store. All Plakar data—snapshots, chunks, metadata—is stored as S3 objects.\nflowchart LR subgraph Sources[\"Any Source\"] FS[\"Data\"] end subgraph Plakar[\"Plakar\"] Transform[\"Encrypt \u0026 deduplicate\"] Connector[\"Store viaS3 API\"] Transform --\u003e Connector end Sources --\u003e Transform subgraph Storage[\"S3 Storage\"] Store[\"Kloset Store\"] end Connector --\u003e Store Register the store, initialize it, and run a backup:\n$ plakar store add my-s3-store \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true $ plakar at \u0026#34;@my-s3-store\u0026#34; create $ plakar at \u0026#34;@my-s3-store\u0026#34; backup /var/www Destination connector # Restores objects from a Kloset store back to an S3 bucket.\nflowchart LR Store[\"Kloset Store\"] subgraph Plakar[\"Plakar\"] Transform[\"Decrypt \u0026 reconstruct\"] Connector[\"Restore viaS3 API\"] Transform --\u003e Connector end Store 
--\u003e Transform subgraph Destination[\"S3 Bucket\"] FS[\"Objects\"] end Connector --\u003e Destination Register the destination and restore a snapshot:\n$ plakar destination add my-s3-restore \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true $ plakar at /var/backups restore -to \u0026#34;@my-s3-restore\u0026#34; \u0026lt;snapshot_id\u0026gt; ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/integrations/s3/","section":"Docs","summary":"Back up and restore S3 buckets with Plakar.","title":"S3","type":"docs"},{"content":" Using Exoscale Compute as a Dedicated Backup Server # This guide configures an Exoscale Compute instance to automatically back up your servers to Exoscale Object Storage (SOS). The setup uses Plakar to create encrypted, deduplicated snapshots on a scheduled interval with web UI monitoring.\nArchitecture # Backup Compute: Runs Plakar and schedules backups Source servers: Exoscale servers to back up Exoscale Object Storage (SOS): Stores encrypted backups flowchart TB subgraph Sources[\"Source Servers\"] Server1[\"Web Server 1\"] Server2[\"Web Server 2\"] ServerN[\"Server N\"] end BackupCompute[\"Backup ComputePlakar + Scheduler\"] subgraph Storage[\"Exoscale Object Storage\"] Kloset[\"Kloset StoreEncrypted \u0026 DeduplicatedBackup\"] end Server1 --\u003e|SSH/SFTP| BackupCompute Server2 --\u003e|SSH/SFTP| BackupCompute ServerN --\u003e|SSH/SFTP| BackupCompute BackupCompute --\u003e|Store Snapshots| Kloset Prerequisites # Exoscale account with billing configured SSH keypair for instance access SSH access to source servers Basic familiarity with Plakar commands Create Object Storage Bucket # Create bucket in Exoscale Portal # In the Exoscale portal, navigate to Storage Click Add to create a new bucket Configure: Zone: Select region (note the name, it\u0026rsquo;ll be used to 
connect to the bucket, e.g. ch-dk-2) Name: plakar-backups (must be globally unique) Click Add Generate IAM API Keys # In the Exoscale portal, navigate to IAM → Keys Click on Add to create new API keys, then provide a name and role, then click Create. Copy the key and secret to a secure location (you cannot see the secret again once you leave the page) Create SSH Keypair # Generate an SSH key locally and copy the public key: $ ssh-keygen -t ed25519 -f ~/.ssh/id_exoscale -C \u0026#34;exoscale-backup\u0026#34; $ cat ~/.ssh/id_exoscale.pub In the Exoscale portal, navigate to Compute → SSH Keys Click on Add, enter a name for the SSH key, paste in the public key, then click Import. Provision Backup Compute Instance # Create compute instance # In the Exoscale Portal, navigate to Compute → Instances Click Add Configure: Name: plakar-backup Template: Ubuntu 24.04 LTS Zone: Same as Object Storage bucket (recommended for better performance) Instance Type: Small (2 vCPUs, 2 GB RAM) or any other size you prefer SSH Key: Select the SSH key created earlier from the dropdown Click Add to provision your instance Set up security group rules # Once the instance is provisioned, navigate to Compute → Security Groups By default the instance is assigned the default security group; click the actions menu on default, then click Details On the next page click Add Rule Configure: Flow direction: Ingress Protocol: TCP Source Type: CIDR Sources: 0.0.0.0/0 allows SSH from anywhere (for better security, use your own IP address here) Start \u0026amp; End port: 22 A description for the rule, e.g. 
SSH Access Click on Create Initial connection # Once the instance is running, note the public IP and connect:\n$ ssh ubuntu@\u0026lt;instance-ip\u0026gt; Install Plakar # Install Plakar on the instance using the Plakar Installation Guide\nConfigure Object Storage # Install S3 integration # $ plakar login -email you@example.com $ plakar pkg add s3 Add storage connector # $ plakar store add exoscale-sos-backups \\ location=s3://\u0026lt;SOS_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_KEY\u0026gt; \\ use_tls=true \\ passphrase=\u0026#39;\u0026lt;YOUR_SECURE_PASSPHRASE\u0026gt;\u0026#39; Replace:\n\u0026lt;SOS_ENDPOINT\u0026gt;: uses the format sos-{zone}.exo.io, where {zone} is the zone selected for the bucket, e.g. sos-ch-dk-2.exo.io \u0026lt;BUCKET_NAME\u0026gt;: e.g., plakar-backups \u0026lt;YOUR_ACCESS_KEY\u0026gt; and \u0026lt;YOUR_SECRET_KEY\u0026gt;: From the IAM API keys created earlier \u0026lt;YOUR_SECURE_PASSPHRASE\u0026gt;: Strong passphrase for encryption Passphrase Configuring the passphrase in the store enables automated backups without prompts.\nInitialize Kloset Store # $ plakar at \u0026#34;@exoscale-sos-backups\u0026#34; create Configure SSH Access to Source Servers # Install SFTP integration # $ plakar pkg add sftp Generate SSH keys for backups # $ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_plakar -C \u0026#34;plakar@backup\u0026#34; Press Enter for no passphrase.\nCopy keys to source servers # $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub user@source-server-1 $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub user@source-server-2 Test access:\n$ ssh -i ~/.ssh/id_ed25519_plakar user@source-server-1 \u0026#39;echo \u0026#34;Success\u0026#34;\u0026#39; Create SSH aliases # $ cat \u0026gt;\u0026gt; ~/.ssh/config \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; Host source-1 HostName source-server-1.example.com User backupuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar Host source-2 
HostName source-server-2.example.com User backupuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar EOF Test:\n$ ssh source-1 \u0026#39;echo \u0026#34;Alias works\u0026#34;\u0026#39; Configure Backup Sources # Add source connectors for each server:\n$ plakar source add web-server-1 sftp://source-1:/var/www $ plakar source add web-server-2 sftp://source-2:/var/www Verify:\n$ plakar source show Test Backup # Run a manual backup to verify configuration:\n# Single source $ plakar at \u0026#34;@exoscale-sos-backups\u0026#34; backup \u0026#34;@web-server-1\u0026#34; # Multiple sources $ plakar at \u0026#34;@exoscale-sos-backups\u0026#34; backup \u0026#34;@web-server-1\u0026#34; \u0026#34;@web-server-2\u0026#34; List snapshots:\n$ plakar at \u0026#34;@exoscale-sos-backups\u0026#34; ls Schedule Automatic Backups # For scheduler configuration and systemd service setup, follow the same steps as the OVHcloud backup server guide, replacing:\n@ovhcloud-s3-backups with @exoscale-sos-backups ubuntu with your actual username if different The scheduler configuration, systemd services, and web UI setup are identical on any Linux machine.\nTroubleshooting # Authentication errors\nVerify SSH keys and user permissions on source servers Can\u0026rsquo;t connect to Object Storage\nCheck S3 credentials and endpoint URL Verify passphrase: plakar store show exoscale-sos-backups Confirm bucket name and zone endpoint match Permission denied\nEnsure SSH user has read access to backup directories Services won\u0026rsquo;t start\nCheck status: systemctl status plakar-scheduler View logs: journalctl -u plakar-scheduler or journalctl -u plakar-ui Alternative UI access\nInstall Plakar locally and configure the same store with Exoscale SOS credentials to access backups without compute instance connection ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/guides/exoscale/exoscale-compute-as-a-dedicated-backup-server/","section":"Docs","summary":"Back up Exoscale compute servers to 
Exoscale Object Storage using a dedicated compute instance.","title":"Using Exoscale Compute as a Dedicated Backup Server","type":"docs"},{"content":" Using Exoscale Compute as a Dedicated Backup Server # This guide configures an Exoscale Compute instance to automatically back up your servers to Exoscale Object Storage (SOS). The setup uses Plakar to create encrypted, deduplicated snapshots on a scheduled interval with web UI monitoring.\nArchitecture # Backup Compute: Runs Plakar and schedules backups Source servers: Exoscale servers to back up Exoscale Object Storage (SOS): Stores encrypted backups flowchart TB subgraph Sources[\"Source Servers\"] Server1[\"Web Server 1\"] Server2[\"Web Server 2\"] ServerN[\"Server N\"] end BackupCompute[\"Backup ComputePlakar + Scheduler\"] subgraph Storage[\"Exoscale Object Storage\"] Kloset[\"Kloset StoreEncrypted \u0026 DeduplicatedBackup\"] end Server1 --\u003e|SSH/SFTP| BackupCompute Server2 --\u003e|SSH/SFTP| BackupCompute ServerN --\u003e|SSH/SFTP| BackupCompute BackupCompute --\u003e|Store Snapshots| Kloset Prerequisites # Exoscale account with billing configured SSH keypair for instance access SSH access to source servers Basic familiarity with Plakar commands Create Object Storage Bucket # Create bucket in Exoscale Portal # In the Exoscale portal, navigate to Storage Click Add to create a new bucket Configure: Zone: Select a region (note the name, it\u0026rsquo;ll be used to connect to the bucket, e.g. ch-dk-2) Name: plakar-backups (must be globally unique) Click Add Generate IAM API Keys # In the Exoscale portal, navigate to IAM → Keys Click on Add to create new API keys, then provide a name and role, then click Create. 
Copy the key and secret to a secure location (you cannot see the secret again once you leave the page) Create SSH Keypair # Generate an SSH key locally and copy the public key: $ ssh-keygen -t ed25519 -f ~/.ssh/id_exoscale -C \u0026#34;exoscale-backup\u0026#34; $ cat ~/.ssh/id_exoscale.pub In the Exoscale portal, navigate to Compute → SSH Keys Click on Add, enter a name for the SSH key, paste in the public key, then click Import. Provision Backup Compute Instance # Create compute instance # In the Exoscale Portal, navigate to Compute → Instances Click Add Configure: Name: plakar-backup Template: Ubuntu 24.04 LTS Zone: Same as Object Storage bucket (recommended for better performance) Instance Type: Small (2 vCPUs, 2 GB RAM) or any other size you prefer SSH Key: Select the SSH key created earlier from the dropdown Click Add to provision your instance Set up security group rules # Once the instance is provisioned, navigate to Compute → Security Groups By default the instance is assigned the default security group; click the actions menu on default, then click Details On the next page click Add Rule Configure: Flow direction: Ingress Protocol: TCP Source Type: CIDR Sources: 0.0.0.0/0 allows SSH from anywhere (for better security, use your own IP address here) Start \u0026amp; End port: 22 A description for the rule, e.g. 
SSH Access Click on Create Initial connection # Once the instance is running, note the public IP and connect:\n$ ssh ubuntu@\u0026lt;instance-ip\u0026gt; Install Plakar # Install Plakar on the instance using the Plakar Installation Guide\nConfigure Object Storage # Install S3 integration # $ plakar login -email you@example.com $ plakar pkg add s3 Add storage connector # $ plakar store add exoscale-sos-backups \\ location=s3://\u0026lt;SOS_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_KEY\u0026gt; \\ use_tls=true \\ passphrase=\u0026#39;\u0026lt;YOUR_SECURE_PASSPHRASE\u0026gt;\u0026#39; Replace:\n\u0026lt;SOS_ENDPOINT\u0026gt;: uses the format sos-{zone}.exo.io, where {zone} is the zone selected for the bucket, e.g. sos-ch-dk-2.exo.io \u0026lt;BUCKET_NAME\u0026gt;: e.g., plakar-backups \u0026lt;YOUR_ACCESS_KEY\u0026gt; and \u0026lt;YOUR_SECRET_KEY\u0026gt;: From the IAM API keys created earlier \u0026lt;YOUR_SECURE_PASSPHRASE\u0026gt;: Strong passphrase for encryption Passphrase Configuring the passphrase in the store enables automated backups without prompts.\nInitialize Kloset Store # $ plakar at \u0026#34;@exoscale-sos-backups\u0026#34; create Configure SSH Access to Source Servers # Install SFTP integration # $ plakar pkg add sftp Generate SSH keys for backups # $ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_plakar -C \u0026#34;plakar@backup\u0026#34; Press Enter for no passphrase.\nCopy keys to source servers # $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub user@source-server-1 $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub user@source-server-2 Test access:\n$ ssh -i ~/.ssh/id_ed25519_plakar user@source-server-1 \u0026#39;echo \u0026#34;Success\u0026#34;\u0026#39; Create SSH aliases # $ cat \u0026gt;\u0026gt; ~/.ssh/config \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; Host source-1 HostName source-server-1.example.com User backupuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar Host source-2 
HostName source-server-2.example.com User backupuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar EOF Test:\n$ ssh source-1 \u0026#39;echo \u0026#34;Alias works\u0026#34;\u0026#39; Configure Backup Sources # Add source connectors for each server:\n$ plakar source add web-server-1 sftp://source-1:/var/www $ plakar source add web-server-2 sftp://source-2:/var/www Verify:\n$ plakar source show Test Backup # Run a manual backup to verify configuration:\n# Single source $ plakar at \u0026#34;@exoscale-sos-backups\u0026#34; backup \u0026#34;@web-server-1\u0026#34; # Multiple sources $ plakar at \u0026#34;@exoscale-sos-backups\u0026#34; backup \u0026#34;@web-server-1\u0026#34; \u0026#34;@web-server-2\u0026#34; List snapshots:\n$ plakar at \u0026#34;@exoscale-sos-backups\u0026#34; ls Schedule Automatic Backups # For scheduler configuration and systemd service setup, follow the same steps as the OVHcloud backup server guide, replacing:\n@ovhcloud-s3-backups with @exoscale-sos-backups ubuntu with your actual username if different The scheduler configuration, systemd services, and web UI setup are identical on any Linux machine.\nTroubleshooting # Authentication errors\nVerify SSH keys and user permissions on source servers Can\u0026rsquo;t connect to Object Storage\nCheck S3 credentials and endpoint URL Verify passphrase: plakar store show exoscale-sos-backups Confirm bucket name and zone endpoint match Permission denied\nEnsure SSH user has read access to backup directories Services won\u0026rsquo;t start\nCheck status: systemctl status plakar-scheduler View logs: journalctl -u plakar-scheduler or journalctl -u plakar-ui Alternative UI access\nInstall Plakar locally and configure the same store with Exoscale SOS credentials to access backups without compute instance connection ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/exoscale/exoscale-compute-as-a-dedicated-backup-server/","section":"Docs","summary":"Back up Exoscale compute servers to 
Exoscale Object Storage using a dedicated compute instance.","title":"Using Exoscale Compute as a Dedicated Backup Server","type":"docs"},{"content":" Using Exoscale Compute as a Dedicated Backup Server # This guide configures an Exoscale Compute instance to automatically back up your servers to Exoscale Object Storage (SOS). The setup uses Plakar to create encrypted, deduplicated snapshots on a scheduled interval with web UI monitoring.\nArchitecture # Backup Compute: Runs Plakar and schedules backups Source servers: Exoscale servers to back up Exoscale Object Storage (SOS): Stores encrypted backups flowchart TB subgraph Sources[\"Source Servers\"] Server1[\"Web Server 1\"] Server2[\"Web Server 2\"] ServerN[\"Server N\"] end BackupCompute[\"Backup ComputePlakar + Scheduler\"] subgraph Storage[\"Exoscale Object Storage\"] Kloset[\"Kloset StoreEncrypted \u0026 DeduplicatedBackup\"] end Server1 --\u003e|SSH/SFTP| BackupCompute Server2 --\u003e|SSH/SFTP| BackupCompute ServerN --\u003e|SSH/SFTP| BackupCompute BackupCompute --\u003e|Store Snapshots| Kloset Prerequisites # Exoscale account with billing configured SSH keypair for instance access SSH access to source servers Basic familiarity with Plakar commands Create Object Storage Bucket # Create bucket in Exoscale Portal # In the Exoscale portal, navigate to Storage Click Add to create a new bucket Configure: Zone: Select a region (note the name, it\u0026rsquo;ll be used to connect to the bucket, e.g. ch-dk-2) Name: plakar-backups (must be globally unique) Click Add Generate IAM API Keys # In the Exoscale portal, navigate to IAM → Keys Click on Add to create new API keys, then provide a name and role, then click Create. 
Copy the key and secret to a secure location (you cannot see the secret again once you leave the page) Create SSH Keypair # Generate an SSH key locally and copy the public key: $ ssh-keygen -t ed25519 -f ~/.ssh/id_exoscale -C \u0026#34;exoscale-backup\u0026#34; $ cat ~/.ssh/id_exoscale.pub In the Exoscale portal, navigate to Compute → SSH Keys Click on Add, enter a name for the SSH key, paste in the public key, then click Import. Provision Backup Compute Instance # Create compute instance # In the Exoscale Portal, navigate to Compute → Instances Click Add Configure: Name: plakar-backup Template: Ubuntu 24.04 LTS Zone: Same as Object Storage bucket (recommended for better performance) Instance Type: Small (2 vCPUs, 2 GB RAM) or any other size you prefer SSH Key: Select the SSH key created earlier from the dropdown Click Add to provision your instance Set up security group rules # Once the instance is provisioned, navigate to Compute → Security Groups By default the instance is assigned the default security group; click the actions menu on default, then click Details On the next page click Add Rule Configure: Flow direction: Ingress Protocol: TCP Source Type: CIDR Sources: 0.0.0.0/0 allows SSH from anywhere (for better security, use your own IP address here) Start \u0026amp; End port: 22 A description for the rule, e.g. 
SSH Access Click on Create Initial connection # Once the instance is running, note the public IP and connect:\n$ ssh ubuntu@\u0026lt;instance-ip\u0026gt; Install Plakar # Install Plakar on the instance using the Plakar Installation Guide\nConfigure Object Storage # Install S3 integration # $ plakar login -email you@example.com $ plakar pkg add s3 Add storage connector # $ plakar store add exoscale-sos-backups \\ location=s3://\u0026lt;SOS_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_KEY\u0026gt; \\ use_tls=true \\ passphrase=\u0026#39;\u0026lt;YOUR_SECURE_PASSPHRASE\u0026gt;\u0026#39; Replace:\n\u0026lt;SOS_ENDPOINT\u0026gt;: uses the format sos-{zone}.exo.io, where {zone} is the zone selected for the bucket, e.g. sos-ch-dk-2.exo.io \u0026lt;BUCKET_NAME\u0026gt;: e.g., plakar-backups \u0026lt;YOUR_ACCESS_KEY\u0026gt; and \u0026lt;YOUR_SECRET_KEY\u0026gt;: From the IAM API keys created earlier \u0026lt;YOUR_SECURE_PASSPHRASE\u0026gt;: Strong passphrase for encryption Passphrase Configuring the passphrase in the store enables automated backups without prompts.\nInitialize Kloset Store # $ plakar at \u0026#34;@exoscale-sos-backups\u0026#34; create Configure SSH Access to Source Servers # Install SFTP integration # $ plakar pkg add sftp Generate SSH keys for backups # $ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_plakar -C \u0026#34;plakar@backup\u0026#34; Press Enter for no passphrase.\nCopy keys to source servers # $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub user@source-server-1 $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub user@source-server-2 Test access:\n$ ssh -i ~/.ssh/id_ed25519_plakar user@source-server-1 \u0026#39;echo \u0026#34;Success\u0026#34;\u0026#39; Create SSH aliases # $ cat \u0026gt;\u0026gt; ~/.ssh/config \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; Host source-1 HostName source-server-1.example.com User backupuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar Host source-2 
HostName source-server-2.example.com User backupuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar EOF Test:\n$ ssh source-1 \u0026#39;echo \u0026#34;Alias works\u0026#34;\u0026#39; Configure Backup Sources # Add source connectors for each server:\n$ plakar source add web-server-1 sftp://source-1:/var/www $ plakar source add web-server-2 sftp://source-2:/var/www Verify:\n$ plakar source show Test Backup # Run a manual backup to verify configuration:\n# Single source $ plakar at \u0026#34;@exoscale-sos-backups\u0026#34; backup \u0026#34;@web-server-1\u0026#34; # Multiple sources $ plakar at \u0026#34;@exoscale-sos-backups\u0026#34; backup \u0026#34;@web-server-1\u0026#34; \u0026#34;@web-server-2\u0026#34; List snapshots:\n$ plakar at \u0026#34;@exoscale-sos-backups\u0026#34; ls Schedule Automatic Backups # For scheduler configuration and systemd service setup, follow the same steps as the OVHcloud backup server guide, replacing:\n@ovhcloud-s3-backups with @exoscale-sos-backups ubuntu with your actual username if different The scheduler configuration, systemd services, and web UI setup are identical on any Linux machine.\nTroubleshooting # Authentication errors\nVerify SSH keys and user permissions on source servers Can\u0026rsquo;t connect to Object Storage\nCheck S3 credentials and endpoint URL Verify passphrase: plakar store show exoscale-sos-backups Confirm bucket name and zone endpoint match Permission denied\nEnsure SSH user has read access to backup directories Services won\u0026rsquo;t start\nCheck status: systemctl status plakar-scheduler View logs: journalctl -u plakar-scheduler or journalctl -u plakar-ui Alternative UI access\nInstall Plakar locally and configure the same store with Exoscale SOS credentials to access backups without compute instance connection ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/exoscale/exoscale-compute-as-a-dedicated-backup-server/","section":"Docs","summary":"Back up Exoscale compute servers to 
Exoscale Object Storage using a dedicated compute instance.","title":"Using Exoscale Compute as a Dedicated Backup Server","type":"docs"},{"content":" Using OVHcloud VPS as a Dedicated Backup Server # This guide configures an OVHcloud VPS to automatically back up your servers to Object Storage. The setup uses Plakar to create encrypted, deduplicated snapshots on a scheduled interval with web UI monitoring.\nArchitecture # Backup VPS: Runs Plakar and schedules backups Source servers: OVHcloud servers to back up OVHcloud Object Storage: Stores encrypted backups flowchart TB subgraph Sources[\"Source Servers\"] Server1[\"Web Server 1\"] Server2[\"Web Server 2\"] ServerN[\"Server N\"] end BackupVPS[\"Backup VPSPlakar + Scheduler\"] subgraph Storage[\"OVHcloud Object Storage\"] Kloset[\"Kloset StoreEncrypted \u0026 DeduplicatedBackup\"] end Server1 --\u003e|SSH/SFTP| BackupVPS Server2 --\u003e|SSH/SFTP| BackupVPS ServerN --\u003e|SSH/SFTP| BackupVPS BackupVPS --\u003e|Store Snapshots| Kloset Prerequisites # OVHcloud account with billing configured SSH access to source servers Basic familiarity with Plakar commands Create Object Storage # Create storage user # Log in to OVHcloud Control Panel Navigate to Public Cloud → Storage \u0026amp; Backup → Object Storage → Users Click Create User Enter description and click Create Download and store credentials securely Create storage container # Navigate to Public Cloud → Storage \u0026amp; Backup → Object Storage Click Create an Object Storage container Configure: Name: plakar-backups Container API: S3-compatible API User: Select the user created above Deployment: Choose 3-AZ (high availability) or 1-AZ (cost efficient) Region: Select location closest to your servers Click Create Reference: OVHcloud S3 Object Storage documentation\nProvision Backup VPS # Order VPS # Go to Bare Metal Cloud → Dedicated and Virtual Servers → Virtual Private Servers Click Order → Configure your VPS Select configuration: Model: VPS-1 (2 vCores, 8 GB 
RAM, 75GB Storage) or larger Region: Same as Object Storage Image: Ubuntu 25.04 Complete order Initial connection # Connect using credentials from delivery email:\nssh ubuntu@your_vps_ip Change the temporary password when prompted, then reconnect.\nReference: OVHcloud VPS Getting Started guide\nInstall Plakar # SSH to the backup VPS and install Plakar:\nssh ubuntu@your-vps-ip Follow the Plakar Installation Guide\nConfigure Object Storage # Install S3 integration # $ plakar pkg add s3 Add storage connector # $ plakar store add ovhcloud-s3-backups \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true \\ passphrase=\u0026#39;\u0026lt;YOUR_SECURE_PASSPHRASE\u0026gt;\u0026#39; Replace:\n\u0026lt;S3_ENDPOINT\u0026gt;: e.g., s3.eu-west-par.io.cloud.ovh.net \u0026lt;BUCKET_NAME\u0026gt;: e.g., plakar-backups \u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; and \u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt;: From Step 1 \u0026lt;YOUR_SECURE_PASSPHRASE\u0026gt;: Strong passphrase for encryption Passphrase Configuring the passphrase in the store enables automated backups without prompts.\nInitialize Kloset Store # $ plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; create Configure SSH Access # Install SFTP integration # $ plakar pkg add sftp Generate SSH keys # $ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_plakar -C \u0026#34;plakar@backup\u0026#34; Press Enter for no passphrase.\nCopy keys to source servers # $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub user@source-server-1 $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub user@source-server-2 Test access:\n$ ssh -i ~/.ssh/id_ed25519_plakar user@source-server-1 \u0026#39;echo \u0026#34;Success\u0026#34;\u0026#39; Create SSH aliases # $ cat \u0026gt;\u0026gt; ~/.ssh/config \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; Host source-1 HostName source-server-1.example.com User backupuser Port 22 
IdentityFile ~/.ssh/id_ed25519_plakar Host source-2 HostName source-server-2.example.com User backupuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar EOF Test:\n$ ssh source-1 \u0026#39;echo \u0026#34;Alias works\u0026#34;\u0026#39; Configure Backup Sources # Add source connectors for each server:\n$ plakar source add web-server-1 sftp://source-1:/var/www $ plakar source add web-server-2 sftp://source-2:/var/www Verify:\n$ plakar source show Test Backup # Run a manual backup to verify configuration:\n# Single source $ plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; backup \u0026#34;@web-server-1\u0026#34; # Multiple sources $ plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; backup \u0026#34;@web-server-1\u0026#34; \u0026#34;@web-server-2\u0026#34; List snapshots:\n$ plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; ls Schedule Automatic Backups # Create scheduler configuration # $ cat \u0026gt; ~/scheduler.yaml \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; agent: tasks: - name: Backup web-server-1 repository: \u0026#34;@ovhcloud-s3-backups\u0026#34; backup: path: \u0026#34;@web-server-1\u0026#34; interval: 24h check: true - name: Backup web-server-2 repository: \u0026#34;@ovhcloud-s3-backups\u0026#34; backup: path: \u0026#34;@web-server-2\u0026#34; interval: 24h check: true EOF Scheduler The scheduler is basic and will be improved in future versions.\nStart scheduler # $ plakar scheduler start -tasks ~/scheduler.yaml See Scheduler Documentation for more scheduling options.\nConfigure Systemd Services # Create scheduler service # $ cat \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; | sudo tee /etc/systemd/system/plakar-scheduler.service \u0026gt; /dev/null [Unit] Description=Plakar Scheduler After=network.target [Service] Type=forking ExecStart=/usr/bin/plakar scheduler start -tasks /home/ubuntu/scheduler.yaml ExecStop=/usr/bin/plakar scheduler stop Restart=on-failure User=ubuntu WorkingDirectory=/home/ubuntu [Install] WantedBy=multi-user.target EOF Access Web UI # 
Option 1: Custom token (recommended) # Update the UI service with a custom token:\n$ cat \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; | sudo tee /etc/systemd/system/plakar-ui.service \u0026gt; /dev/null [Unit] Description=Plakar Web UI After=network.target [Service] Type=simple Environment=\u0026#34;PLAKAR_UI_TOKEN=your-secure-token-here\u0026#34; ExecStart=/usr/bin/plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; ui -listen :8080 Restart=always User=ubuntu WorkingDirectory=/home/ubuntu [Install] WantedBy=multi-user.target EOF Reload and restart:\n$ sudo systemctl daemon-reload $ sudo systemctl restart plakar-ui Access: http://your-vps-ip:8080?plakar_token=your-secure-token-here\nOption 2: Auto-generated token # Retrieve the token from logs:\n$ sudo journalctl -u plakar-ui -n 100 --no-pager | grep -i token Look for:\nlaunching webUI at http://:8080?plakar_token=d9fccdbd-77a3-41a0-8657-24d77a6d00ac Access: http://your-vps-ip:8080 with the token from the URL.\nSecurity Configure firewall to restrict port 8080 access or use a reverse proxy with SSL.\nTroubleshooting # Authentication errors\nVerify SSH keys and user permissions on source servers Can\u0026rsquo;t connect to Object Storage\nCheck S3 credentials and endpoint URL Verify passphrase: plakar store show ovhcloud-s3-backups Permission denied\nEnsure SSH user has read access to backup directories Services won\u0026rsquo;t start\nCheck status: systemctl status plakar-scheduler View logs: journalctl -u plakar-scheduler or journalctl -u plakar-ui Alternative UI access\nInstall Plakar locally and configure the same store with OVHcloud S3 credentials to access backups without VPS connection ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/guides/ovhcloud/ovhcloud-as-a-dedicated-backup-server/","section":"Docs","summary":"Automate backups of OVHcloud servers to Object Storage using a dedicated VPS.","title":"Using OVHcloud VPS as a Dedicated Backup Server","type":"docs"},{"content":" Using OVHcloud 
VPS as a Dedicated Backup Server # This guide configures an OVHcloud VPS to automatically back up your servers to Object Storage. The setup uses Plakar to create encrypted, deduplicated snapshots on a scheduled interval with web UI monitoring.\nArchitecture # Backup VPS: Runs Plakar and schedules backups Source servers: OVHcloud servers to back up OVHcloud Object Storage: Stores encrypted backups flowchart TB subgraph Sources[\"Source Servers\"] Server1[\"Web Server 1\"] Server2[\"Web Server 2\"] ServerN[\"Server N\"] end BackupVPS[\"Backup VPS (Plakar + Scheduler)\"] subgraph Storage[\"OVHcloud Object Storage\"] Kloset[\"Kloset Store (Encrypted \u0026 Deduplicated Backup)\"] end Server1 --\u003e|SSH/SFTP| BackupVPS Server2 --\u003e|SSH/SFTP| BackupVPS ServerN --\u003e|SSH/SFTP| BackupVPS BackupVPS --\u003e|Store Snapshots| Kloset Prerequisites # OVHcloud account with billing configured SSH access to source servers Basic familiarity with Plakar commands Create Object Storage # Create storage user # Log in to OVHcloud Control Panel Navigate to Public Cloud → Storage \u0026amp; Backup → Object Storage → Users Click Create User Enter description and click Create Download and store credentials securely Create storage container # Navigate to Public Cloud → Storage \u0026amp; Backup → Object Storage Click Create an Object Storage container Configure: Name: plakar-backups Container API: S3-compatible API User: Select the user created above Deployment: Choose 3-AZ (high availability) or 1-AZ (cost efficient) Region: Select location closest to your servers Click Create Reference: OVHcloud S3 Object Storage documentation\nProvision Backup VPS # Order VPS # Go to Bare Metal Cloud → Dedicated and Virtual Servers → Virtual Private Servers Click Order → Configure your VPS Select configuration: Model: VPS-1 (2 vCores, 8 GB RAM, 75 GB Storage) or larger Region: Same as Object Storage Image: Ubuntu 25.04 Complete order Initial connection # Connect using credentials from delivery
email:\nssh ubuntu@your_vps_ip Change the temporary password when prompted, then reconnect.\nReference: OVHcloud VPS Getting Started guide\nInstall Plakar # SSH to the backup VPS and install Plakar:\nssh ubuntu@your-vps-ip Follow the Plakar Installation Guide\nConfigure Object Storage # Install S3 integration # $ plakar pkg add s3 Add storage connector # $ plakar store add ovhcloud-s3-backups \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true \\ passphrase=\u0026#39;\u0026lt;YOUR_SECURE_PASSPHRASE\u0026gt;\u0026#39; Replace:\n\u0026lt;S3_ENDPOINT\u0026gt;: e.g., s3.eu-west-par.io.cloud.ovh.net \u0026lt;BUCKET_NAME\u0026gt;: e.g., plakar-backups \u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; and \u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt;: From Step 1 \u0026lt;YOUR_SECURE_PASSPHRASE\u0026gt;: Strong passphrase for encryption Passphrase Configuring the passphrase in the store enables automated backups without prompts.\nInitialize Kloset Store # $ plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; create Configure SSH Access # Install SFTP integration # $ plakar pkg add sftp Generate SSH keys # $ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_plakar -C \u0026#34;plakar@backup\u0026#34; Press Enter for no passphrase.\nCopy keys to source servers # $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub user@source-server-1 $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub user@source-server-2 Test access:\n$ ssh -i ~/.ssh/id_ed25519_plakar user@source-server-1 \u0026#39;echo \u0026#34;Success\u0026#34;\u0026#39; Create SSH aliases # $ cat \u0026gt;\u0026gt; ~/.ssh/config \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; Host source-1 HostName source-server-1.example.com User backupuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar Host source-2 HostName source-server-2.example.com User backupuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar EOF Test:\n$ 
ssh source-1 \u0026#39;echo \u0026#34;Alias works\u0026#34;\u0026#39; Configure Backup Sources # Add source connectors for each server:\n$ plakar source add web-server-1 sftp://source-1:/var/www $ plakar source add web-server-2 sftp://source-2:/var/www Verify:\n$ plakar source show Test Backup # Run a manual backup to verify configuration:\n# Single source $ plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; backup \u0026#34;@web-server-1\u0026#34; # Multiple sources $ plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; backup \u0026#34;@web-server-1\u0026#34; \u0026#34;@web-server-2\u0026#34; List snapshots:\n$ plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; ls Schedule Automatic Backups # Create scheduler configuration # $ cat \u0026gt; ~/scheduler.yaml \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; agent: tasks: - name: Backup web-server-1 repository: \u0026#34;@ovhcloud-s3-backups\u0026#34; backup: path: \u0026#34;@web-server-1\u0026#34; interval: 24h check: true - name: Backup web-server-2 repository: \u0026#34;@ovhcloud-s3-backups\u0026#34; backup: path: \u0026#34;@web-server-2\u0026#34; interval: 24h check: true EOF Scheduler The scheduler is basic and will be improved in future versions.\nStart scheduler # $ plakar scheduler start -tasks ~/scheduler.yaml See Scheduler Documentation for more scheduling options.\nConfigure Systemd Services # Create scheduler service # $ cat \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; | sudo tee /etc/systemd/system/plakar-scheduler.service \u0026gt; /dev/null [Unit] Description=Plakar Scheduler After=network.target [Service] Type=forking ExecStart=/usr/bin/plakar scheduler start -tasks /home/ubuntu/scheduler.yaml ExecStop=/usr/bin/plakar scheduler stop Restart=on-failure User=ubuntu WorkingDirectory=/home/ubuntu [Install] WantedBy=multi-user.target EOF Create UI service # $ cat \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; | sudo tee /etc/systemd/system/plakar-ui.service \u0026gt; /dev/null [Unit] Description=Plakar Web UI 
After=network.target [Service] Type=simple ExecStart=/usr/bin/plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; ui -listen :8080 Restart=always User=ubuntu WorkingDirectory=/home/ubuntu [Install] WantedBy=multi-user.target EOF Installation Path If Plakar is installed elsewhere, update the path. Use which plakar to find it.\nEnable and start services # $ sudo systemctl daemon-reload $ sudo systemctl enable plakar-scheduler plakar-ui $ sudo systemctl start plakar-scheduler plakar-ui Check status:\n$ sudo systemctl status plakar-scheduler $ sudo systemctl status plakar-ui Access Web UI # When the UI service starts, Plakar automatically generates an access token. Retrieve it from the service logs:\n$ sudo journalctl -u plakar-ui -n 100 --no-pager | grep -i token You should see output similar to:\nlaunching webUI at http://:8080?plakar_token=d9fccdbd-77a3-41a0-8657-24d77a6d00ac Copy the plakar_token value from the URL and use it to access the UI: http://your-vps-ip:8080?plakar_token=\u0026lt;token\u0026gt;\nCustom UI Token From v1.1.0 onwards, you can set a custom token via the PLAKAR_UI_TOKEN environment variable instead of retrieving it from the logs. 
See the v1.1.0 version of this guide for details.\nSecurity Configure firewall to restrict port 8080 access or use a reverse proxy with SSL.\nTroubleshooting # Authentication errors\nVerify SSH keys and user permissions on source servers Can\u0026rsquo;t connect to Object Storage\nCheck S3 credentials and endpoint URL Verify passphrase: plakar store show ovhcloud-s3-backups Permission denied\nEnsure SSH user has read access to backup directories Services won\u0026rsquo;t start\nCheck status: systemctl status plakar-scheduler View logs: journalctl -u plakar-scheduler or journalctl -u plakar-ui Alternative UI access\nInstall Plakar locally and configure the same store with OVHcloud S3 credentials to access backups without VPS connection ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/ovhcloud/ovhcloud-as-a-dedicated-backup-server/","section":"Docs","summary":"Automate backups of OVHcloud servers to Object Storage using a dedicated VPS.","title":"Using OVHcloud VPS as a Dedicated Backup Server","type":"docs"},{"content":" Using OVHcloud VPS as a Dedicated Backup Server # This guide configures an OVHcloud VPS to automatically back up your servers to Object Storage. 
The setup uses Plakar to create encrypted, deduplicated snapshots on a scheduled interval with web UI monitoring.\nArchitecture # Backup VPS: Runs Plakar and schedules backups Source servers: OVHcloud servers to back up OVHcloud Object Storage: Stores encrypted backups flowchart TB subgraph Sources[\"Source Servers\"] Server1[\"Web Server 1\"] Server2[\"Web Server 2\"] ServerN[\"Server N\"] end BackupVPS[\"Backup VPS (Plakar + Scheduler)\"] subgraph Storage[\"OVHcloud Object Storage\"] Kloset[\"Kloset Store (Encrypted \u0026 Deduplicated Backup)\"] end Server1 --\u003e|SSH/SFTP| BackupVPS Server2 --\u003e|SSH/SFTP| BackupVPS ServerN --\u003e|SSH/SFTP| BackupVPS BackupVPS --\u003e|Store Snapshots| Kloset Prerequisites # OVHcloud account with billing configured SSH access to source servers Basic familiarity with Plakar commands Create Object Storage # Create storage user # Log in to OVHcloud Control Panel Navigate to Public Cloud → Storage \u0026amp; Backup → Object Storage → Users Click Create User Enter description and click Create Download and store credentials securely Create storage container # Navigate to Public Cloud → Storage \u0026amp; Backup → Object Storage Click Create an Object Storage container Configure: Name: plakar-backups Container API: S3-compatible API User: Select the user created above Deployment: Choose 3-AZ (high availability) or 1-AZ (cost efficient) Region: Select location closest to your servers Click Create Reference: OVHcloud S3 Object Storage documentation\nProvision Backup VPS # Order VPS # Go to Bare Metal Cloud → Dedicated and Virtual Servers → Virtual Private Servers Click Order → Configure your VPS Select configuration: Model: VPS-1 (2 vCores, 8 GB RAM, 75 GB Storage) or larger Region: Same as Object Storage Image: Ubuntu 25.04 Complete order Initial connection # Connect using credentials from delivery email:\nssh ubuntu@your_vps_ip Change the temporary password when prompted, then reconnect.\nReference: OVHcloud VPS Getting Started
guide\nInstall Plakar # SSH to the backup VPS and install Plakar:\nssh ubuntu@your-vps-ip Follow the Plakar Installation Guide\nConfigure Object Storage # Install S3 integration # $ plakar pkg add s3 Add storage connector # $ plakar store add ovhcloud-s3-backups \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; \\ secret_access_key=\u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt; \\ use_tls=true \\ passphrase=\u0026#39;\u0026lt;YOUR_SECURE_PASSPHRASE\u0026gt;\u0026#39; Replace:\n\u0026lt;S3_ENDPOINT\u0026gt;: e.g., s3.eu-west-par.io.cloud.ovh.net \u0026lt;BUCKET_NAME\u0026gt;: e.g., plakar-backups \u0026lt;YOUR_ACCESS_KEY_ID\u0026gt; and \u0026lt;YOUR_SECRET_ACCESS_KEY\u0026gt;: From Step 1 \u0026lt;YOUR_SECURE_PASSPHRASE\u0026gt;: Strong passphrase for encryption Passphrase Configuring the passphrase in the store enables automated backups without prompts.\nInitialize Kloset Store # $ plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; create Configure SSH Access # Install SFTP integration # $ plakar pkg add sftp Generate SSH keys # $ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_plakar -C \u0026#34;plakar@backup\u0026#34; Press Enter for no passphrase.\nCopy keys to source servers # $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub user@source-server-1 $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub user@source-server-2 Test access:\n$ ssh -i ~/.ssh/id_ed25519_plakar user@source-server-1 \u0026#39;echo \u0026#34;Success\u0026#34;\u0026#39; Create SSH aliases # $ cat \u0026gt;\u0026gt; ~/.ssh/config \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; Host source-1 HostName source-server-1.example.com User backupuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar Host source-2 HostName source-server-2.example.com User backupuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar EOF Test:\n$ ssh source-1 \u0026#39;echo \u0026#34;Alias works\u0026#34;\u0026#39; Configure Backup Sources # Add source connectors for each 
server:\n$ plakar source add web-server-1 sftp://source-1:/var/www $ plakar source add web-server-2 sftp://source-2:/var/www Verify:\n$ plakar source show Test Backup # Run a manual backup to verify configuration:\n# Single source $ plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; backup \u0026#34;@web-server-1\u0026#34; # Multiple sources $ plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; backup \u0026#34;@web-server-1\u0026#34; \u0026#34;@web-server-2\u0026#34; List snapshots:\n$ plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; ls Schedule Automatic Backups # Create scheduler configuration # $ cat \u0026gt; ~/scheduler.yaml \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; agent: tasks: - name: Backup web-server-1 repository: \u0026#34;@ovhcloud-s3-backups\u0026#34; backup: path: \u0026#34;@web-server-1\u0026#34; interval: 24h check: true - name: Backup web-server-2 repository: \u0026#34;@ovhcloud-s3-backups\u0026#34; backup: path: \u0026#34;@web-server-2\u0026#34; interval: 24h check: true EOF Scheduler The scheduler is basic and will be improved in future versions.\nStart scheduler # $ plakar scheduler start -tasks ~/scheduler.yaml See Scheduler Documentation for more scheduling options.\nConfigure Systemd Services # Create scheduler service # $ cat \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; | sudo tee /etc/systemd/system/plakar-scheduler.service \u0026gt; /dev/null [Unit] Description=Plakar Scheduler After=network.target [Service] Type=forking ExecStart=/usr/bin/plakar scheduler start -tasks /home/ubuntu/scheduler.yaml ExecStop=/usr/bin/plakar scheduler stop Restart=on-failure User=ubuntu WorkingDirectory=/home/ubuntu [Install] WantedBy=multi-user.target EOF Access Web UI # Option 1: Custom token (recommended) # Update the UI service with a custom token:\n$ cat \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; | sudo tee /etc/systemd/system/plakar-ui.service \u0026gt; /dev/null [Unit] Description=Plakar Web UI After=network.target [Service] Type=simple 
Environment=\u0026#34;PLAKAR_UI_TOKEN=your-secure-token-here\u0026#34; ExecStart=/usr/bin/plakar at \u0026#34;@ovhcloud-s3-backups\u0026#34; ui -listen :8080 Restart=always User=ubuntu WorkingDirectory=/home/ubuntu [Install] WantedBy=multi-user.target EOF Reload and restart:\n$ sudo systemctl daemon-reload $ sudo systemctl restart plakar-ui Access: http://your-vps-ip:8080?plakar_token=your-secure-token-here\nOption 2: Auto-generated token # Retrieve the token from logs:\n$ sudo journalctl -u plakar-ui -n 100 --no-pager | grep -i token Look for:\nlaunching webUI at http://:8080?plakar_token=d9fccdbd-77a3-41a0-8657-24d77a6d00ac Access: http://your-vps-ip:8080 with the token from the URL.\nSecurity Configure firewall to restrict port 8080 access or use a reverse proxy with SSL.\nTroubleshooting # Authentication errors\nVerify SSH keys and user permissions on source servers Can\u0026rsquo;t connect to Object Storage\nCheck S3 credentials and endpoint URL Verify passphrase: plakar store show ovhcloud-s3-backups Permission denied\nEnsure SSH user has read access to backup directories Services won\u0026rsquo;t start\nCheck status: systemctl status plakar-scheduler View logs: journalctl -u plakar-scheduler or journalctl -u plakar-ui Alternative UI access\nInstall Plakar locally and configure the same store with OVHcloud S3 credentials to access backups without VPS connection ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/ovhcloud/ovhcloud-as-a-dedicated-backup-server/","section":"Docs","summary":"Automate backups of OVHcloud servers to Object Storage using a dedicated VPS.","title":"Using OVHcloud VPS as a Dedicated Backup Server","type":"docs"},{"content":" Getting Started # This section provides a quick overview to help you get started with Plakar. 
Whether you\u0026rsquo;re new to backup solutions or just new to Plakar, these resources will guide you through the initial setup and basic operations.\nQuickstart Get started with plakar: installation, creating your first backup, verifying, restoring, and using the UI. This guide helps you quickly set up plakar and perform essential backup operations.\nJoin the Community # Discord: Get help, ask questions, and join live discussions. GitHub: Report bugs, request features, and… don\u0026rsquo;t forget to star the repo! ","date":"18 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/quickstart/","section":"Docs","summary":"","title":"Getting Started","type":"docs"},{"content":" Logical backups with SQL dumps # The Plakar MySQL integration uses mysqldump (MySQL) or mariadb-dump (MariaDB) to produce logical backups. These are standard SQL files that are portable, human-readable, and restorable without Plakar if needed.\nTwo URI schemes map to independent sets of binaries:\nProtocol Target Dump tool Restore tool mysql:// MySQL 5.7 / 8.x mysqldump mysql mysql+mariadb:// MariaDB 10.x / 11.x mariadb-dump mariadb For a deeper understanding of logical backups and MySQL backup strategies, refer to the official MySQL documentation on mysqldump.\nRequirements # A running MySQL or MariaDB server. A database user with sufficient privileges (see Required privileges). mysqldump and mysql in $PATH for MySQL, or mariadb-dump and mariadb for MariaDB. Install the package # $ plakar pkg add mysql What gets stored in a snapshot # File Description /manifest.json Server metadata: version, configuration, databases, tables, routines, triggers, events. /\u0026lt;database\u0026gt;.sql Single-database dump (when database is specified). /all.sql Full-server dump (when database is omitted). 
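The requirements above expect the matching client binaries in $PATH before the first backup runs. A quick preflight check catches a missing tool early; `check_bins` below is a hypothetical helper for illustration, not part of Plakar:

```shell
# Report which of the required client binaries are reachable in $PATH.
# check_bins is a hypothetical helper, not a Plakar command.
check_bins() {
  for bin in "$@"; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "found: $bin"
    else
      echo "missing: $bin"
    fi
  done
}

# mysql:// scheme needs the MySQL client tools
check_bins mysqldump mysql
# mysql+mariadb:// scheme needs the MariaDB client tools
check_bins mariadb-dump mariadb
```

Run it once on the backup host; any `missing:` line means the corresponding scheme will fail at dump or restore time.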
Back up a single database # # MySQL $ plakar source add mydb mysql://dbuser:secret@db.example.com/mydb $ plakar at /var/backups backup @mydb # MariaDB $ plakar source add mydb mysql+mariadb://dbuser:secret@db.example.com/mydb $ plakar at /var/backups backup @mydb Back up all databases # Omit the database name from the URI to use --all-databases:\n# MySQL $ plakar source add alldb mysql://root:secret@db.example.com $ plakar at /var/backups backup @alldb # MariaDB $ plakar source add alldb mysql+mariadb://root:secret@db.example.com $ plakar at /var/backups backup @alldb Restore a single database # The target database must already exist:\n$ plakar destination add mydbdst mysql://dbuser:secret@target.example.com/mydb $ plakar at /var/backups restore -to @mydbdst \u0026lt;snapshot_id\u0026gt; To have Plakar create the database automatically, set create_db=true:\n$ plakar destination add mydbdst mysql://dbuser:secret@target.example.com/mydb \\ create_db=true $ plakar at /var/backups restore -to @mydbdst \u0026lt;snapshot_id\u0026gt; Restore all databases # $ plakar destination add mydbdst mysql://root:secret@target.example.com $ plakar at /var/backups restore -to @mydbdst \u0026lt;snapshot_id\u0026gt; List snapshots # $ plakar at /var/backups ls Source options # Option Default Description location — Connection URI: mysql://[user[:password]@]host[:port][/database] host 127.0.0.1 Server hostname. Overrides the URI host. port 3306 Server port. Overrides the URI port. username — Username. Overrides the URI user. password — Password. Overrides the URI password. Passed via MYSQL_PWD, never on the command line. database — Database to back up. Overrides the URI path. If omitted, all databases are backed up. single_transaction true Use --single-transaction for a lock-free InnoDB snapshot. routines true Include stored procedures and functions. events true Include event scheduler events. triggers true Include triggers. no_data false Dump schema only, no data. 
no_create_info false Dump data only, no schema. no_tablespaces true Suppress tablespace statements. hex_blob false Encode BINARY/BLOB columns as hex. ssl_mode — TLS mode: disabled, preferred, required, verify_ca, verify_identity. ssl_cert — Path to the client SSL certificate (PEM). ssl_key — Path to the client SSL private key (PEM). ssl_ca — Path to the CA certificate (PEM). mysql_bin_dir — Directory containing MySQL binaries. MySQL only. column_statistics true Query COLUMN_STATISTICS. Set to false when using mysqldump 8.0 against MySQL 5.7. MySQL only. set_gtid_purged AUTO GTID mode: AUTO, ON, or OFF. MySQL only. mariadb_bin_dir — Directory containing MariaDB binaries. MariaDB only. Destination options # Option Default Description location — Connection URI: mysql://[user[:password]@]host[:port][/database] host 127.0.0.1 Server hostname. Overrides the URI host. port 3306 Server port. Overrides the URI port. username — Username. Overrides the URI user. password — Password. Overrides the URI password. database — Target database. Inferred from the dump filename if omitted. create_db false Issue CREATE DATABASE IF NOT EXISTS before restoring. force false Continue on SQL errors during restore. ssl_mode — TLS mode (same values as source). ssl_cert — Path to the client SSL certificate (PEM). ssl_key — Path to the client SSL private key (PEM). ssl_ca — Path to the CA certificate (PEM). mysql_bin_dir — Directory containing the mysql binary. MySQL only. mariadb_bin_dir — Directory containing the mariadb binary. MariaDB only. 
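Both option tables above resolve the endpoint from the location URI first, with host and port falling back to the documented defaults of 127.0.0.1 and 3306. The precedence can be sketched in plain shell (illustrative only; this is not Plakar's actual parser, and it ignores edge cases such as `@` inside a password):

```shell
# Sketch of host/port resolution from a mysql:// location URI,
# falling back to the defaults listed in the option tables.
resolve_endpoint() {
  uri="$1"
  hostport="${uri#mysql://}"     # strip the scheme
  hostport="${hostport#*@}"      # strip user:password@, if present
  hostport="${hostport%%/*}"     # strip the /database path, if present
  host="${hostport%%:*}"
  case "$hostport" in
    *:*) port="${hostport##*:}" ;;
    *)   port=3306 ;;
  esac
  echo "${host:-127.0.0.1}:${port}"
}

resolve_endpoint "mysql://dbuser:secret@db.example.com/mydb"   # db.example.com:3306
resolve_endpoint "mysql://db.example.com:3307"                 # db.example.com:3307
```

Explicit `host`/`port` options then override whatever the URI carried, per the tables above.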
Required privileges # Single database backup # GRANT SELECT, SHOW VIEW, TRIGGER, EVENT ON mydb.* TO \u0026#39;backup\u0026#39;@\u0026#39;%\u0026#39;; GRANT PROCESS ON *.* TO \u0026#39;backup\u0026#39;@\u0026#39;%\u0026#39;; With single_transaction=true (default), LOCK TABLES is not required for InnoDB tables.\nAll-databases backup # GRANT SELECT, SHOW VIEW, TRIGGER, LOCK TABLES, EVENT, RELOAD ON *.* TO \u0026#39;backup\u0026#39;@\u0026#39;%\u0026#39;; GRANT PROCESS ON *.* TO \u0026#39;backup\u0026#39;@\u0026#39;%\u0026#39;; Considerations # MySQL vs MariaDB binaries # Always use binaries that match your server. On Debian and Ubuntu, apt install default-mysql-client installs MariaDB\u0026rsquo;s mysqldump by default. MariaDB\u0026rsquo;s mysqldump is not compatible with MySQL 8 for all-databases backups and will produce dumps that fail to restore.\nVerify you have the correct binary:\n$ mysqldump --version # MySQL: mysqldump Ver 8.x Distrib 8.x, for Linux (x86_64) # MariaDB: mysqldump from 11.x.x-MariaDB ... If both are installed, point the integration to the correct directory using mysql_bin_dir.\nInnoDB and MyISAM # single_transaction (enabled by default) produces a consistent InnoDB snapshot without locking tables. For databases with MyISAM tables, single_transaction does not prevent locks on those tables. If you need a consistent backup across MyISAM tables, set single_transaction=false to use --lock-all-tables instead, accepting write locks during the dump.\nGTIDs # When the server has GTIDs enabled, the dump includes SET @@GLOBAL.GTID_PURGED statements that will cause restore to fail on a server that already has GTID history. Set set_gtid_purged=OFF on the source to omit GTID information, or run RESET MASTER on the target before restoring.\nUser and grant migration # Single-database backups do not include user accounts or grants. 
To migrate users, use an all-databases backup, export grants manually with a tool like pt-show-grants, or recreate accounts manually on the target.\nCompression # Do not enable compression at the dump level. Plakar deduplicates and compresses data automatically. Pre-compressed dumps reduce deduplication effectiveness across snapshots.\nKloset store location # The examples above use /var/backups as the Kloset store. Any supported store backend can be used instead. See Create a Kloset store for details.\nSee also # MySQL integration on GitHub Official mysqldump documentation Official mariadb-dump documentation ","date":"18 March 2026","externalUrl":null,"permalink":"/docs/main/guides/mysql/sqldump/","section":"Docs","summary":"Back up MySQL and MariaDB databases using the Plakar MySQL integration and restore them.","title":"Logical backups with SQL dumps","type":"docs"},{"content":" Logical backups with SQL dumps # Logical backups export database structure (CREATE DATABASE, CREATE TABLE) and content (INSERT statements) using mysqldump. 
These backups are machine-independent and portable across MySQL versions and architectures.\nFor a deeper understanding of logical backups and MySQL backup strategies, we recommend reading the official MySQL documentation on mysqldump.\nPrerequisites # Running MySQL server MySQL credentials with dump privileges mysqldump and mysql utilities installed Configure Credentials # Set environment variables to avoid exposing credentials on the command line:\n$ export MYSQL_HOST=xxxx $ export MYSQL_TCP_PORT=3306 $ export MYSQL_USER=xxxx $ export MYSQL_PWD=xxxx Back Up Single Database # Basic backup # $ mysqldump \u0026lt;dbname\u0026gt; | plakar at /var/backups backup stdin:dump.sql InnoDB with all objects (recommended) # $ mysqldump --single-transaction \\ --routines \\ --triggers \\ --events \\ \u0026lt;dbname\u0026gt; | plakar at /var/backups backup stdin:dump.sql Options:\n--single-transaction: Consistent snapshot without locking tables (InnoDB) --routines: Include stored procedures and functions --triggers: Include table triggers --events: Include scheduled events Back Up All Databases # $ mysqldump --all-databases \\ --single-transaction \\ --routines \\ --triggers \\ --events \\ --set-gtid-purged=OFF | \\ plakar at /var/backups backup stdin:all_databases.sql The --set-gtid-purged=OFF option improves portability across MySQL configurations.\nRestore Database # Single database # $ plakar at /var/backups cat \u0026lt;SNAPSHOT_ID\u0026gt;:dump.sql | mysql \u0026lt;dbname\u0026gt; All databases # $ plakar at /var/backups cat \u0026lt;SNAPSHOT_ID\u0026gt;:all_databases.sql | mysql List snapshots:\n$ plakar at /var/backups ls Mixed Storage Engines # For databases using both InnoDB and MyISAM, use --lock-all-tables:\n$ mysqldump --all-databases --lock-all-tables | \\ plakar at /var/backups backup stdin:dump.sql This blocks all write operations during the dump.\nBest Practices # Credentials # Use environment variables or ~/.my.cnf Never pass passwords with
-p\u0026lt;password\u0026gt; on command line (exposes in process listings) Compression # Do not compress dumps manually Plakar automatically deduplicates and compresses data Pre-compressed dumps prevent effective deduplication Storage Engines # Use --single-transaction for InnoDB (default since MySQL 5.5) Use --lock-all-tables for mixed InnoDB/MyISAM environments ","date":"18 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/mysql/sqldump/","section":"Docs","summary":"Back up MySQL databases using mysqldump and restore from these backups.","title":"Logical backups with SQL dumps","type":"docs"},{"content":" Logical backups with SQL dumps # The Plakar MySQL integration uses mysqldump (MySQL) or mariadb-dump (MariaDB) to produce logical backups. These are standard SQL files that are portable, human-readable, and restorable without Plakar if needed.\nTwo URI schemes map to independent sets of binaries:\nProtocol Target Dump tool Restore tool mysql:// MySQL 5.7 / 8.x mysqldump mysql mysql+mariadb:// MariaDB 10.x / 11.x mariadb-dump mariadb For a deeper understanding of logical backups and MySQL backup strategies, refer to the official MySQL documentation on mysqldump.\nRequirements # A running MySQL or MariaDB server. A database user with sufficient privileges (see Required privileges). mysqldump and mysql in $PATH for MySQL, or mariadb-dump and mariadb for MariaDB. Install the package # $ plakar pkg add mysql What gets stored in a snapshot # File Description /manifest.json Server metadata: version, configuration, databases, tables, routines, triggers, events. /\u0026lt;database\u0026gt;.sql Single-database dump (when database is specified). /all.sql Full-server dump (when database is omitted). 
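Since the two URI schemes map to different binaries, it helps to confirm which flavor the server is before adding the source. A rough sketch (`scheme_for` is a hypothetical helper; it keys on the `MariaDB` marker that MariaDB servers include in their version strings, as in the `mysqldump --version` output shown under Considerations):

```shell
# Pick the connection URI scheme from a server version string.
# Hypothetical helper for illustration only.
scheme_for() {
  case "$1" in
    *MariaDB*) echo "mysql+mariadb://" ;;
    *)         echo "mysql://" ;;
  esac
}

scheme_for "8.0.36 MySQL Community Server"   # mysql://
scheme_for "11.4.2-MariaDB-deb12"            # mysql+mariadb://
```

You can obtain the version string with `SELECT VERSION();` on the server in question.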
Back up a single database # # MySQL $ plakar source add mydb mysql://dbuser:secret@db.example.com/mydb $ plakar at /var/backups backup @mydb # MariaDB $ plakar source add mydb mysql+mariadb://dbuser:secret@db.example.com/mydb $ plakar at /var/backups backup @mydb Back up all databases # Omit the database name from the URI to use --all-databases:\n# MySQL $ plakar source add alldb mysql://root:secret@db.example.com $ plakar at /var/backups backup @alldb # MariaDB $ plakar source add alldb mysql+mariadb://root:secret@db.example.com $ plakar at /var/backups backup @alldb Restore a single database # The target database must already exist:\n$ plakar destination add mydbdst mysql://dbuser:secret@target.example.com/mydb $ plakar at /var/backups restore -to @mydbdst \u0026lt;snapshot_id\u0026gt; To have Plakar create the database automatically, set create_db=true:\n$ plakar destination add mydbdst mysql://dbuser:secret@target.example.com/mydb \\ create_db=true $ plakar at /var/backups restore -to @mydbdst \u0026lt;snapshot_id\u0026gt; Restore all databases # $ plakar destination add mydbdst mysql://root:secret@target.example.com $ plakar at /var/backups restore -to @mydbdst \u0026lt;snapshot_id\u0026gt; List snapshots # $ plakar at /var/backups ls Source options # Option Default Description location — Connection URI: mysql://[user[:password]@]host[:port][/database] host 127.0.0.1 Server hostname. Overrides the URI host. port 3306 Server port. Overrides the URI port. username — Username. Overrides the URI user. password — Password. Overrides the URI password. Passed via MYSQL_PWD, never on the command line. database — Database to back up. Overrides the URI path. If omitted, all databases are backed up. single_transaction true Use --single-transaction for a lock-free InnoDB snapshot. routines true Include stored procedures and functions. events true Include event scheduler events. triggers true Include triggers. no_data false Dump schema only, no data. 
no_create_info false Dump data only, no schema. no_tablespaces true Suppress tablespace statements. hex_blob false Encode BINARY/BLOB columns as hex. ssl_mode — TLS mode: disabled, preferred, required, verify_ca, verify_identity. ssl_cert — Path to the client SSL certificate (PEM). ssl_key — Path to the client SSL private key (PEM). ssl_ca — Path to the CA certificate (PEM). mysql_bin_dir — Directory containing MySQL binaries. MySQL only. column_statistics true Query COLUMN_STATISTICS. Set to false when using mysqldump 8.0 against MySQL 5.7. MySQL only. set_gtid_purged AUTO GTID mode: AUTO, ON, or OFF. MySQL only. mariadb_bin_dir — Directory containing MariaDB binaries. MariaDB only. Destination options # Option Default Description location — Connection URI: mysql://[user[:password]@]host[:port][/database] host 127.0.0.1 Server hostname. Overrides the URI host. port 3306 Server port. Overrides the URI port. username — Username. Overrides the URI user. password — Password. Overrides the URI password. database — Target database. Inferred from the dump filename if omitted. create_db false Issue CREATE DATABASE IF NOT EXISTS before restoring. force false Continue on SQL errors during restore. ssl_mode — TLS mode (same values as source). ssl_cert — Path to the client SSL certificate (PEM). ssl_key — Path to the client SSL private key (PEM). ssl_ca — Path to the CA certificate (PEM). mysql_bin_dir — Directory containing the mysql binary. MySQL only. mariadb_bin_dir — Directory containing the mariadb binary. MariaDB only. 
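The options above note that the password is passed via the MYSQL_PWD environment variable, never on the command line. The snippet below is illustrative only, not the integration's code: a value placed in a child process's environment never appears in its argv, which is what `ps` exposes, whereas a `-p<password>` argument does.

```shell
# Illustrative only: hand a secret to a child process through its environment.
# An argument such as -p<password> shows up in `ps` output; an environment
# variable set for the child does not appear in its argv.
MYSQL_PWD='s3cret' sh -c 'echo "client sees password of length ${#MYSQL_PWD}"'
```

mysqldump and mysql honor MYSQL_PWD in the same way, which is why the integration prefers it over `-p`.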
Required privileges # Single database backup # GRANT SELECT, SHOW VIEW, TRIGGER, EVENT ON mydb.* TO \u0026#39;backup\u0026#39;@\u0026#39;%\u0026#39;; GRANT PROCESS ON *.* TO \u0026#39;backup\u0026#39;@\u0026#39;%\u0026#39;; With single_transaction=true (default), LOCK TABLES is not required for InnoDB tables.\nAll-databases backup # GRANT SELECT, SHOW VIEW, TRIGGER, LOCK TABLES, EVENT, RELOAD ON *.* TO \u0026#39;backup\u0026#39;@\u0026#39;%\u0026#39;; GRANT PROCESS ON *.* TO \u0026#39;backup\u0026#39;@\u0026#39;%\u0026#39;; Considerations # MySQL vs MariaDB binaries # Always use binaries that match your server. On Debian and Ubuntu, apt install default-mysql-client installs MariaDB\u0026rsquo;s mysqldump by default. MariaDB\u0026rsquo;s mysqldump is not compatible with MySQL 8 for all-databases backups and will produce dumps that fail to restore.\nVerify you have the correct binary:\n$ mysqldump --version # MySQL: mysqldump Ver 8.x Distrib 8.x, for Linux (x86_64) # MariaDB: mysqldump from 11.x.x-MariaDB ... If both are installed, point the integration to the correct directory using mysql_bin_dir.\nInnoDB and MyISAM # single_transaction (enabled by default) produces a consistent InnoDB snapshot without locking tables. For databases with MyISAM tables, single_transaction does not prevent locks on those tables. If you need a consistent backup across MyISAM tables, set single_transaction=false to use --lock-all-tables instead, accepting write locks during the dump.\nGTIDs # When the server has GTIDs enabled, the dump includes SET @@GLOBAL.GTID_PURGED statements that will cause restore to fail on a server that already has GTID history. Set set_gtid_purged=OFF on the source to omit GTID information, or run RESET MASTER on the target before restoring.\nUser and grant migration # Single-database backups do not include user accounts or grants. 
To migrate users, use an all-databases backup, export grants manually with a tool like pt-show-grants, or recreate accounts manually on the target.\nCompression # Do not enable compression at the dump level. Plakar deduplicates and compresses data automatically. Pre-compressed dumps reduce deduplication effectiveness across snapshots.\nKloset store location # The examples above use /var/backups as the Kloset store. Any supported store backend can be used instead. See Create a Kloset store for details.\nSee also # MySQL integration on GitHub Official mysqldump documentation Official mariadb-dump documentation ","date":"18 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/mysql/sqldump/","section":"Docs","summary":"Back up MySQL and MariaDB databases using the Plakar MySQL integration and restore them.","title":"Logical backups with SQL dumps","type":"docs"},{"content":" Quickstart # Welcome to Plakar - easy, secure and efficient backups for people who value their time and data. The aim of this quick guide is to get you up and running with Plakar and create your first backup within minutes. Let\u0026rsquo;s get started!\nWhat you will need # an internet connection a Linux, macOS, Windows, FreeBSD or OpenBSD machine to run the software admin access to install sufficient local storage to store your backups a web browser (for logging in and using the UI) Install plakar # Debian/Ubuntu (APT) RPM-based (DNF) macOS (Homebrew) Windows Go Install Others For Debian-based operating systems (such as Ubuntu or Debian), the easiest way is to use our APT repository. 
First, install necessary dependencies and add the repository\u0026rsquo;s GPG key:\n$ sudo apt-get update $ sudo apt-get install -y curl gnupg2 $ curl -fsSL https://plakar.io/dist/keys/community-v1.0.0.gpg | sudo gpg --dearmor -o /usr/share/keyrings/plakar.gpg $ echo \u0026#34;deb [signed-by=/usr/share/keyrings/plakar.gpg] https://plakar.io/dist/repos/deb/ stable main\u0026#34; | sudo tee /etc/apt/sources.list.d/plakar.list Then update the package list and install plakar:\n$ sudo apt-get update $ sudo apt-get install plakar For operating systems which use RPM-based packages (such as Fedora), the easiest way is to use our DNF repository.\nFirst, set up the repository:\n$ cat \u0026lt;\u0026lt;EOF | sudo tee /etc/yum.repos.d/plakar.repo [plakar] name=Plakar Repository baseurl=https://plakar.io/dist/repos/rpm/$(uname -m)/ enabled=1 gpgcheck=0 gpgkey=https://plakar.io/dist/keys/community-v1.0.0.gpg EOF Then install plakar with:\n$ sudo dnf install plakar The simplest way to install Plakar on macOS is with Homebrew. Ensure you have Homebrew installed, then add the Plakar tap and install Plakar with:\n$ brew install plakarkorp/tap/plakar If you prefer not to use our tap, you can install from the default Homebrew repository instead with brew install plakar. Note that the version in the default repository may not always be the latest release.\nmacOS includes built-in protection against untrusted binaries. To allow plakar to run, you will need to explicitly approve it in the Privacy \u0026amp; Security settings.\nThe simplest way to install Plakar on Windows is by downloading the pre-built package from the Download page.\nThe downloaded package is simply an archive containing the executable. 
Copy this to anywhere on your system PATH, or run it directly from a shell where it is installed.\nTo install using the Go toolchain, use go install with the version you want to install, or latest:\n$ go install \u0026#34;github.com/PlakarKorp/plakar@v1.0.5\u0026#34; This will install the binary into your $GOPATH/bin directory, which you may need to add to your $PATH if it is not already there.\nArch Linux # Plakar is available on the Arch User Repository (AUR). If you use an AUR helper such as yay, you can install it with:\n$ yay -S plakar Building from Source # You can build Plakar from source. You will need:\nGo (Golang) make (available by default on most Linux distributions; on macOS, install the Xcode command line tools with xcode-select --install; on Windows, use WSL or a tool like GnuWin32 Make) Clone the repository and run make:\n$ git clone https://github.com/PlakarKorp/plakar.git $ cd plakar $ make This produces a plakar binary in the current directory. To build a specific release version, check out the corresponding tag before running make:\n$ git checkout v1.0.5 $ make Other Platforms # For other supported operating systems, or for an alternative to the methods mentioned above, it is possible to download pre-built binaries for different platforms and architectures from the Download page.\nThese are in standard formats for the relevant platforms, so consult OS-specific documentation for how to install them.\nVerify the installation by running:\n$ plakar version This should return the expected version number, for example \u0026lsquo;plakar/v1.0.5\u0026rsquo;.\nCreate a Kloset Store # Before we can back up any data, we need to define where the backup will go. In Plakar terms, this storage location is called a \u0026lsquo;Kloset Store\u0026rsquo;. You can find out more about the concept and rationale behind Kloset in this post on our blog.\nFor our first backup, we will create a local Kloset Store on the filesystem of the host OS. 
In a real backup scenario you would want to create a backup on a different physical device, so substitute in a better location if you have one.\nIn the CLI enter the following command:\n$ plakar at $HOME/backups create Plakar will then ask you to enter a passphrase, and repeat it to confirm.\nDon\u0026rsquo;t Lose or Forget your Passphrase Be extra careful when choosing the passphrase. People with access to the Kloset Store and knowledge of the passphrase can read your backups.\nBy default Plakar will enforce rules on your choice of passphrase to make sure it is complex enough to be secure. To add complexity, use a mixture of upper and lower case characters, numbers and symbols.\nYour passphrase is not stored anywhere and cannot be recovered in case of loss. A lost passphrase means the data within the repository can no longer be accessed or recovered.\nCreate your first backup # Now that we have created the Kloset Store where data will be stored we can use it to create our first backup. Plakar uses the at keyword to specify where a command is to take place.\nTo create a simple example backup, try running:\n$ plakar at $HOME/backups backup /private/etc Plakar will process the files it finds at that location and pass them to the Kloset where they will be chunked and encrypted. The output will indicate the progress:\n9abc3294: OK ✓ /private/etc/ftpusers 9abc3294: OK ✓ /private/etc/asl/com.apple.iokit.power 9abc3294: OK ✓ /private/etc/pam.d/screensaver_new_ctk [...] 9abc3294: OK ✓ /private/etc/apache2 9abc3294: OK ✓ /private/etc 9abc3294: OK ✓ /private 9abc3294: OK ✓ / backup: created unsigned snapshot 9abc3294 of size 3.1 MB in 72.55875ms The output lists the short form of the snapshot\u0026rsquo;s id number. This is used to identify a particular snapshot and is also how you identify the snapshot to use for various Plakar commands.\nThe help command Learning new tools can be confusing. To make things easier, Plakar includes built-in help for all commands. 
Just use plakar help and then the command you need help with for a full list of options and examples. For example, if you forget what the options are for restoring files from a snapshot: plakar help restore\nYou can verify that the backup exists:\n$ plakar at $HOME/backups ls 2025-09-02T15:38:16Z 9abc3294 3.1 MB 0s /private/etc The output lists the datestamp of the last backup, the short UUID, the size of files backed-up, the time it took to create the backup and the source path of the backup.\nVerify the integrity of the contents:\n$ plakar at $HOME/backups check 9abc3294 9abc3294: ✓ /private/etc/afpovertcp.cfg 9abc3294: ✓ /private/etc/apache2/extra/httpd-autoindex.conf 9abc3294: ✓ /private/etc/apache2/extra/httpd-dav.conf [...] 9abc3294: ✓ /private/etc/xtab 9abc3294: ✓ /private/etc/zshrc 9abc3294: ✓ /private/etc/zshrc_Apple_Terminal 9abc3294: ✓ /private/etc check: verification of 9abc3294:/private/etc completed successfully And restore it to a local directory:\n$ plakar at $HOME/backups restore -to /tmp/restore 9abc3294 In this case we are restoring to temporary storage as it is just a test. The output will list the restored files as it creates them:\n9abc3294: OK ✓ /private/etc/afpovertcp.cfg 9abc3294: OK ✓ /private/etc/apache2/extra/httpd-autoindex.conf 9abc3294: OK ✓ /private/etc/apache2/extra/httpd-dav.conf [...] 9abc3294: OK ✓ /private/etc/xtab 9abc3294: OK ✓ /private/etc/zprofile 9abc3294: OK ✓ /private/etc/zshrc 9abc3294: OK ✓ /private/etc/zshrc_Apple_Terminal restore: restoration of 9abc3294:/private/etc at /tmp/restore completed successfully To verify the files have been re-created, list the directory they were restored to:\n$ ls -l /tmp/restore This will list the restored files. 
Note that the properties of the restored files, such as the creation date, will be the same as the original files that were backed up:\ntotal 1784 -rw-r--r--@ 1 gilles wheel 515 Feb 19 22:47 afpovertcp.cfg drwxr-xr-x@ 9 gilles wheel 288 Feb 19 22:47 apache2 drwxr-xr-x@ 16 gilles wheel 512 Feb 19 22:47 asl [...] -rw-r--r--@ 1 gilles wheel 0 Feb 19 22:47 xtab -r--r--r--@ 1 gilles wheel 255 Feb 19 22:47 zprofile -r--r--r--@ 1 gilles wheel 3094 Feb 19 22:47 zshrc -rw-r--r--@ 1 gilles wheel 9335 Feb 19 22:47 zshrc_Apple_Terminal Login # By default, Plakar works without requiring you to create an account or log in. You can back up and restore your data with just a few commands, no external services involved.\nHowever, logging in unlocks optional features that improve usability and monitoring, and adds the ability to easily install pre-built integrations. In plakar, an integration is a package which supports an additional protocol as a source, destination or storage method (or all three), such as FTP, Google Cloud Storage or an S3 bucket.\nLogging in is simple and needs only an email address or GitHub account for authentication.\nTo log in using the CLI:\n$ plakar login -email \u0026lt;youremailaddress@example.com\u0026gt; Substitute in your own email address and follow the prompt. You can then check your email and follow the link sent from plakar.io.\nTo check that you are now logged in you can run:\n$ plakar login -status Access the UI # Plakar provides a web interface to view the backups and their content. To start the web interface, run:\n$ plakar at $HOME/backups ui Your default browser will open a new tab. You can navigate through the snapshots, search and view the files, and download them.\nCongratulations! # You have successfully:\ninstalled Plakar created a backup verified it restored files used the graphical UI How long did it take? That\u0026rsquo;s how easy Plakar is for simple, secure backups.\nNext steps # There is plenty more to discover about Plakar. 
Here are our suggestions on what to try next:\nCreate a schedule for your backups Discover more about the plakar command line syntax Learn more about why one backup is not enough ","date":"18 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/quickstart/quickstart/","section":"Docs","summary":"Get started with plakar: installation, creating your first backup, verifying, restoring, and using the UI. This guide helps you quickly set up plakar and perform essential backup operations.","title":"Quickstart","type":"docs"},{"content":" Scheduling Tasks # Plakar includes a scheduler that can run backups as well as tasks like restoring files, synchronizing Kloset stores, and verifying backup integrity.\nBackups only protect you if they run regularly. Without automation, it\u0026rsquo;s easy to forget to run a backup. The backup you didn\u0026rsquo;t run is the one you\u0026rsquo;ll wish you had when something goes wrong. The Plakar scheduler lets you define tasks that run automatically at a given interval, so your backups happen consistently on a schedule.\nIn this guide, we will show how to set up the scheduler to run backups every day.\nRequirements # Create a configuration for your Kloset store. This ensures the scheduler can later retrieve the store passphrase:\n$ plakar store add mybackups /var/backups passphrase=mysuperpassphrase Then, create the store referencing the configuration:\n$ plakar at \u0026#34;@mybackups\u0026#34; create Configuration # Create the configuration file scheduler.yaml for the scheduler in your current directory with the following content:\nagent: tasks: - name: backup Plakar source code repository: \u0026#34;@mybackups\u0026#34; backup: path: /Users/niluje/dev/plakar/plakar interval: 24h check: true This configuration file defines a task for the Plakar scheduler, where:\nname is the task label, displayed in the UI. repository refers to the Kloset store. 
The syntax @mystore corresponds to the store previously configured with plakar store add mystore. backup is the task type. In this case, we back up the Plakar source code at the given path every 24 hours. check runs a verification after the backup is created. Running the scheduler # You can start the scheduler by running:\n$ plakar scheduler start -tasks ./scheduler.yaml The scheduler runs in the background. To stop it, run:\n$ plakar scheduler stop ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/main/guides/setup-scheduler-daily-backups/","section":"Docs","summary":"Learn how to configure and run the Plakar scheduler to automate backups.","title":"Scheduling Tasks","type":"docs"},{"content":" Scheduling Tasks # Plakar includes a scheduler that can run backups as well as tasks like restoring files, synchronizing Kloset stores, and verifying backup integrity.\nBackups only protect you if they run regularly. Without automation, it\u0026rsquo;s easy to forget to run a backup. The backup you didn\u0026rsquo;t run is the one you\u0026rsquo;ll wish you had when something goes wrong. The Plakar scheduler lets you define tasks that run automatically at a given interval, so your backups happen consistently on a schedule.\nIn this guide, we will show how to set up the scheduler to run backups every day.\nRequirements # Create a configuration for your Kloset store. 
This ensures the scheduler can later retrieve the store passphrase:\n$ plakar store add mybackups /var/backups passphrase=mysuperpassphrase Then, create the store referencing the configuration:\n$ plakar at \u0026#34;@mybackups\u0026#34; create Configuration # Create the configuration file scheduler.yaml for the scheduler in your current directory with the following content:\nagent: tasks: - name: backup Plakar source code repository: \u0026#34;@mybackups\u0026#34; backup: path: /Users/niluje/dev/plakar/plakar interval: 24h check: true This configuration file defines a task for the Plakar scheduler, where:\nname is the task label, displayed in the UI. repository refers to the Kloset store. The syntax @mystore corresponds to the store previously configured with plakar store add mystore. backup is the task type. In this case, we back up the Plakar source code at the given path every 24 hours. check runs a verification after the backup is created. Running the scheduler # You can start the scheduler by running:\n$ plakar scheduler start -tasks ./scheduler.yaml The scheduler runs in the background. To stop it, run:\n$ plakar scheduler stop ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/guides/setup-scheduler-daily-backups/","section":"Docs","summary":"Learn how to configure and run the Plakar scheduler to automate backups.","title":"Scheduling Tasks","type":"docs"},{"content":" Scheduling Tasks # Plakar includes a scheduler that can run backups as well as tasks like restoring files, synchronizing Kloset stores, and verifying backup integrity.\nBackups only protect you if they run regularly. Without automation, it\u0026rsquo;s easy to forget to run a backup. The backup you didn\u0026rsquo;t run is the one you\u0026rsquo;ll wish you had when something goes wrong. 
The Plakar scheduler lets you define tasks that run automatically at a given interval, so your backups happen consistently on a schedule.\nIn this guide, we will show how to set up the scheduler to run backups every day.\nRequirements # Create a configuration for your Kloset store. This ensures the scheduler can later retrieve the store passphrase:\n$ plakar store add mybackups /var/backups passphrase=mysuperpassphrase Then, create the store referencing the configuration:\n$ plakar at \u0026#34;@mybackups\u0026#34; create Configuration # Create the configuration file scheduler.yaml for the scheduler in your current directory with the following content:\nagent: tasks: - name: backup Plakar source code repository: \u0026#34;@mybackups\u0026#34; backup: path: /Users/niluje/dev/plakar/plakar interval: 24h check: true This configuration file defines a task for the Plakar scheduler, where:\nname is the task label, displayed in the UI. repository refers to the Kloset store. The syntax @mystore corresponds to the store previously configured with plakar store add mystore. backup is the task type. In this case, we back up the Plakar source code at the given path every 24 hours. check runs a verification after the backup is created. Running the scheduler # You can start the scheduler by running:\n$ plakar scheduler start -tasks ./scheduler.yaml The scheduler runs in the background. To stop it, run:\n$ plakar scheduler stop ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/setup-scheduler-daily-backups/","section":"Docs","summary":"Learn how to configure and run the Plakar scheduler to automate backups.","title":"Scheduling Tasks","type":"docs"},{"content":" Scheduling Tasks # Plakar includes a scheduler that can run backups as well as tasks like restoring files, synchronizing Kloset stores, and verifying backup integrity.\nBackups only protect you if they run regularly. Without automation, it\u0026rsquo;s easy to forget to run a backup. 
The backup you didn\u0026rsquo;t run is the one you\u0026rsquo;ll wish you had when something goes wrong. The Plakar scheduler lets you define tasks that run automatically at a given interval, so your backups happen consistently on a schedule.\nIn this guide, we will show how to set up the scheduler to run backups every day.\nRequirements # Create a configuration for your Kloset store. This ensures the scheduler can later retrieve the store passphrase:\n$ plakar store add mybackups /var/backups passphrase=mysuperpassphrase Then, create the store referencing the configuration:\n$ plakar at \u0026#34;@mybackups\u0026#34; create Configuration # Create the configuration file scheduler.yaml for the scheduler in your current directory with the following content:\nagent: tasks: - name: backup Plakar source code repository: \u0026#34;@mybackups\u0026#34; backup: path: /Users/niluje/dev/plakar/plakar interval: 24h check: true This configuration file defines a task for the Plakar scheduler, where:\nname is the task label, displayed in the UI. repository refers to the Kloset store. The syntax @mystore corresponds to the store previously configured with plakar store add mystore. backup is the task type. In this case, we back up the Plakar source code at the given path every 24 hours. check runs a verification after the backup is created. Running the scheduler # You can start the scheduler by running:\n$ plakar scheduler start -tasks ./scheduler.yaml The scheduler runs in the background. To stop it, run:\n$ plakar scheduler stop ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/setup-scheduler-daily-backups/","section":"Docs","summary":"Learn how to configure and run the Plakar scheduler to automate backups.","title":"Scheduling Tasks","type":"docs"},{"content":" Getting Started # This section provides a quick overview to help you get started with Plakar. 
Whether you\u0026rsquo;re new to backup solutions or just new to Plakar, these resources will guide you through the initial setup and basic operations.\nOverview A powerful backup tool with deduplication, end-to-end encryption, and flexible integrations for most data sources.\nInstallation Install Plakar and verify your installation.\nQuickstart Get started with plakar: create your first backup, verify integrity, restore, and use the UI.\nSynchronize multiple copies Create a second copy of your Kloset Store to improve the durability of your backups.\nBackup non-filesystem data Create a backup for your non-filesystem data. In this guide, we will back up an S3 bucket but this logic applies to any connector supported by plakar.\nJoin the Community # Discord: Get help, ask questions, and join live discussions. GitHub: Report bugs, request features, and… don\u0026rsquo;t forget to star the repo! ","date":"11 March 2026","externalUrl":null,"permalink":"/docs/main/quickstart/","section":"Docs","summary":"","title":"Getting Started","type":"docs"},{"content":" Getting Started # This section provides a quick overview to help you get started with Plakar. Whether you\u0026rsquo;re new to backup solutions or just new to Plakar, these resources will guide you through the initial setup and basic operations.\nOverview A powerful backup tool with deduplication, end-to-end encryption, and flexible integrations for most data sources.\nInstallation Install Plakar and verify your installation.\nQuickstart Get started with plakar: create your first backup, verify integrity, restore, and use the UI.\nSynchronize multiple copies Create a second copy of your Kloset Store to improve the durability of your backups.\nBackup non-filesystem data Create a backup for your non-filesystem data. In this guide, we will back up an S3 bucket but this logic applies to any connector supported by plakar.\nJoin the Community # Discord: Get help, ask questions, and join live discussions. 
GitHub: Report bugs, request features, and… don\u0026rsquo;t forget to star the repo! ","date":"11 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/quickstart/","section":"Docs","summary":"","title":"Getting Started","type":"docs"},{"content":" Getting Started # This section provides a quick overview to help you get started with Plakar. Whether you\u0026rsquo;re new to backup solutions or just new to Plakar, these resources will guide you through the initial setup and basic operations.\nOverview A powerful backup tool with deduplication, end-to-end encryption, and flexible integrations for most data sources.\nInstallation Install Plakar and verify your installation.\nQuickstart Get started with plakar: create your first backup, verify integrity, restore, and use the UI.\nSynchronize multiple copies Create a second copy of your Kloset Store to improve the durability of your backups.\nBackup non-filesystem data Create a backup for your non-filesystem data. In this guide, we will back up an S3 bucket but this logic applies to any connector supported by plakar.\nJoin the Community # Discord: Get help, ask questions, and join live discussions. GitHub: Report bugs, request features, and… don\u0026rsquo;t forget to star the repo! ","date":"11 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/quickstart/","section":"Docs","summary":"","title":"Getting Started","type":"docs"},{"content":" Overview # Plakar is a simple, powerful backup tool designed to protect your data without the complexity. Whether you\u0026rsquo;re backing up your laptop or managing infrastructure backups, Plakar makes it straightforward.\nWhat Makes Plakar Different? # Smart Storage: Plakar automatically deduplicates your data, so backing up the same files multiple times doesn\u0026rsquo;t waste space. Real Security: Your backups are encrypted before they leave your system. Not even the storage provider can read your data. Independent Snapshots: Each backup is complete and independent. 
Delete old snapshots without breaking newer ones, or restore from any point in time. Flexible: Back up from filesystems, databases, cloud services, or remote servers. Restore to any destination you need. Getting Started # Plakar Installation Guide Create your first backup Core Concepts # As you use Plakar, these concepts will help you understand how it works:\nKloset Store - The engine that powers Plakar\u0026rsquo;s storage Integrations - Connecting to different data sources and destinations ","date":"11 March 2026","externalUrl":null,"permalink":"/docs/main/quickstart/overview/","section":"Docs","summary":"A powerful backup tool with deduplication, end-to-end encryption, and flexible integrations for most data sources.","title":"Overview","type":"docs"},{"content":" Overview # Plakar is a simple, powerful backup tool designed to protect your data without the complexity. Whether you\u0026rsquo;re backing up your laptop or managing infrastructure backups, Plakar makes it straightforward.\nWhat Makes Plakar Different? # Smart Storage: Plakar automatically deduplicates your data, so backing up the same files multiple times doesn\u0026rsquo;t waste space. Real Security: Your backups are encrypted before they leave your system. Not even the storage provider can read your data. Independent Snapshots: Each backup is complete and independent. Delete old snapshots without breaking newer ones, or restore from any point in time. Flexible: Back up from filesystems, databases, cloud services, or remote servers. Restore to any destination you need. 
Getting Started # Plakar Installation Guide Create your first backup Core Concepts # As you use Plakar, these concepts will help you understand how it works:\nKloset Store - The engine that powers Plakar\u0026rsquo;s storage Integrations - Connecting to different data sources and destinations ","date":"11 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/quickstart/overview/","section":"Docs","summary":"A powerful backup tool with deduplication, end-to-end encryption, and flexible integrations for most data sources.","title":"Overview","type":"docs"},{"content":" Overview # Plakar is a simple, powerful backup tool designed to protect your data without the complexity. Whether you\u0026rsquo;re backing up your laptop or managing infrastructure backups, Plakar makes it straightforward.\nWhat Makes Plakar Different? # Smart Storage: Plakar automatically deduplicates your data, so backing up the same files multiple times doesn\u0026rsquo;t waste space. Real Security: Your backups are encrypted before they leave your system. Not even the storage provider can read your data. Independent Snapshots: Each backup is complete and independent. Delete old snapshots without breaking newer ones, or restore from any point in time. Flexible: Back up from filesystems, databases, cloud services, or remote servers. Restore to any destination you need. 
Getting Started # Plakar Installation Guide Create your first backup Core Concepts # As you use Plakar, these concepts will help you understand how it works:\nKloset Store - The engine that powers Plakar\u0026rsquo;s storage Integrations - Connecting to different data sources and destinations ","date":"11 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/quickstart/overview/","section":"Docs","summary":"A powerful backup tool with deduplication, end-to-end encryption, and flexible integrations for most data sources.","title":"Overview","type":"docs"},{"content":" Plakar: Developer branch # Getting Started Overview Installation Quickstart Synchronize multiple copies Backup non-filesystem data Guides Scheduling Tasks Importing Configurations Creating a Kloset Store Serving a Kloset Store over HTTP Excluding files from a backup Retrieving secrets via external command Creating a custom connector Logging In to Plakar Managing packages Pruning snapshots MySQL PostgreSQL OVHcloud Exoscale Integrations S3 SFTP / SSH Notion Dropbox iCloud Drive Koofr Google Drive OneDrive OpenDrive Proton Drive Proxmox Kubernetes etcd Explanations How Plakar Works Should you push or pull backups How many Kloset Stores should you create Why multiple backup copies matter Why you need to backup your SaaS How Maintenance Works References Plakar Ptar Command line syntax Go Kloset SDK Commands Community ","date":"11 March 2026","externalUrl":null,"permalink":"/docs/main/","section":"Docs","summary":"Plakar documentation hub, find guides, references, and resources for working with Plakar.","title":"Plakar: Developer branch","type":"docs"},{"content":" Infrastructure # Secret Providers How to manage credentials in Plakar Control Plane using secret providers.\n","date":"23 April 2026","externalUrl":null,"permalink":"/control-plane-docs/infrastructure/","section":"Control Plane Docs","summary":"","title":"Infrastructure","type":"control-plane-docs"},{"content":" Enrollment # When you first 
access your Plakar Control Plane instance, you are taken through a one-time enrollment process. Enrollment registers your appliance with plakar.io to retrieve your license and set up billing reporting. No backup data is ever transferred; only the consumption metrics needed for billing are sent.\n1. Owner email # The first thing you enter is an owner email address. This is the email plakar.io uses for billing, license reporting, and any account-level communication. A verification link is sent to this address; click it, then return to the setup page and continue.\nOwnership can be transferred later if needed.\n2. Organization # Next you create an organization. This is the account that groups your backups, team members, and billing together. Use your company name or team name.\n3. Admin account # You then create an admin account for this specific instance. This is a local account on the appliance, separate from the owner email in step 1. You can use the same email address or a different one.\n4. All set # Once the admin account is created, you are shown a confirmation screen with your organization name, admin details, and the current Plakar Control Plane version. From here you can go straight to the dashboard.\nOffline mode # If you operate in an air-gapped or PCI-DSS environment and cannot allow outbound connections to plakar.io, contact us to discuss offline mode options.\n","date":"20 April 2026","externalUrl":null,"permalink":"/control-plane-docs/intro/enrollment/","section":"Control Plane Docs","summary":"How to enroll your Plakar Control Plane instance on first setup.","title":"Enrollment","type":"control-plane-docs"},{"content":" Back Up an Exoscale Managed MySQL Database # This guide backs up an Exoscale Managed MySQL database using mysqldump streamed through Plakar to Exoscale Object Storage (SOS). 
The result is an encrypted, deduplicated snapshot stored separately from your database infrastructure.\nArchitecture # flowchart TB subgraph Client[\"Backup Client\"] MySQLDump[\"mysqldump\"] Plakar[\"Plakar stdin integration\"] end subgraph DB[\"Exoscale Managed MySQL\"] MySQL[\"MySQL\"] end subgraph Storage[\"Exoscale Object Storage\"] SOS[\"Kloset Store (Encrypted \u0026 Deduplicated)\"] end MySQL --\u003e|SQL dump| MySQLDump MySQLDump --\u003e|stdin| Plakar Plakar --\u003e|Snapshots| SOS Prerequisites # Exoscale account with billing configured Create MySQL Database # Provision database # Log in to Exoscale Portal Go to DBAAS → Services Click on the button with an ellipsis icon, then select Add MySQL Service from the dropdown Configure: Zone: Select location Database name Plan: Select instance size IP Filters: click on Add CIDR and enter your IP address to access the database, or use 0.0.0.0/0 to access it from any IP Click Add Download connection details # In the database connection data tab, download your CA Certificates and get the other connection details. 
In the users tab, save your database user password Install Tools # Install MySQL client:\n$ sudo apt update $ sudo apt install mysql-client Install Plakar using the installation guide.\nConfigure MySQL Connection # Set environment variables from connection details:\n$ export MYSQL_HOST=\u0026lt;DB_HOST\u0026gt; $ export MYSQL_TCP_PORT=21699 $ export MYSQL_USER=\u0026lt;DB_USER\u0026gt; $ export MYSQL_PWD=\u0026lt;DB_PASSWORD\u0026gt; Configure SSL/TLS with CA certificate:\n# Place CA certificate in a secure location $ sudo mkdir -p /etc/mysql/certs $ sudo cp ca.pem /etc/mysql/certs/ $ sudo chmod 644 /etc/mysql/certs/ca.pem Create MySQL configuration file:\n$ cat \u0026gt; ~/.my.cnf \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; [client] ssl-ca=/etc/mysql/certs/ca.pem ssl-mode=REQUIRED EOF $ chmod 600 ~/.my.cnf Test connection:\n$ mysql -e \u0026#34;SELECT VERSION();\u0026#34; Configure Object Storage # Install S3 integration # $ plakar login -email you@example.com $ plakar pkg add s3 Create Object Storage bucket # If not already configured, follow: Exoscale Object Storage setup\nAdd storage connector # $ plakar store add exoscale-sos-mysql \\ location=s3://\u0026lt;SOS_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;ACCESS_KEY\u0026gt; \\ secret_access_key=\u0026lt;SECRET_KEY\u0026gt; \\ use_tls=true Replace:\n\u0026lt;SOS_ENDPOINT\u0026gt;: e.g., sos-ch-dk-2.exo.io \u0026lt;BUCKET_NAME\u0026gt;: e.g., plakar-backups \u0026lt;ACCESS_KEY\u0026gt; and \u0026lt;SECRET_KEY\u0026gt;: From Exoscale IAM Initialize store # $ plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; create Back Up Database # $ mysqldump --single-transaction \\ --routines \\ --triggers \\ --events \\ \u0026lt;DB_NAME\u0026gt; | plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; backup stdin:dump.sql Verify:\n$ plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; ls Restore Database # Retrieve snapshot ID:\n$ plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; ls Restore single database 
# $ plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; cat \u0026lt;SNAPSHOT_ID\u0026gt;:dump.sql | mysql \u0026lt;DB_NAME\u0026gt; Troubleshooting # Connection refused\nVerify MYSQL_HOST, MYSQL_TCP_PORT, MYSQL_USER, MYSQL_PWD environment variables Check database is running in Exoscale Portal Verify network access/firewall rules Authentication failed\nConfirm user credentials S3 upload errors\nCheck S3 credentials: plakar store show exoscale-sos-mysql Verify endpoint URL and bucket name Confirm bucket exists in Exoscale Portal mysqldump not found\nInstall MySQL client: sudo apt install mysql-client ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/guides/exoscale/backup-exoscale-managed-mysql/","section":"Docs","summary":"Back up an Exoscale Managed MySQL database to Exoscale Object Storage using mysqldump and Plakar","title":"Back Up an Exoscale Managed MySQL Database","type":"docs"},{"content":" Back Up an Exoscale Managed MySQL Database # This guide backs up an Exoscale Managed MySQL database using mysqldump streamed through Plakar to Exoscale Object Storage (SOS). 
The result is an encrypted, deduplicated snapshot stored separately from your database infrastructure.\nArchitecture # flowchart TB subgraph Client[\"Backup Client\"] MySQLDump[\"mysqldump\"] Plakar[\"Plakar stdin integration\"] end subgraph DB[\"Exoscale Managed MySQL\"] MySQL[\"MySQL\"] end subgraph Storage[\"Exoscale Object Storage\"] SOS[\"Kloset Store (Encrypted \u0026 Deduplicated)\"] end MySQL --\u003e|SQL dump| MySQLDump MySQLDump --\u003e|stdin| Plakar Plakar --\u003e|Snapshots| SOS Prerequisites # Exoscale account with billing configured Create MySQL Database # Provision database # Log in to Exoscale Portal Go to DBAAS → Services Click on the button with an ellipsis icon, then select Add MySQL Service from the dropdown Configure: Zone: Select location Database name Plan: Select instance size IP Filters: click on Add CIDR and enter your IP address to access the database, or use 0.0.0.0/0 to access it from any IP Click Add Download connection details # In the database connection data tab, download your CA Certificates and get the other connection details. 
In the users tab, save your database user password Install Tools # Install MySQL client:\n$ sudo apt update $ sudo apt install mysql-client Install Plakar using the installation guide.\nConfigure MySQL Connection # Set environment variables from connection details:\n$ export MYSQL_HOST=\u0026lt;DB_HOST\u0026gt; $ export MYSQL_TCP_PORT=21699 $ export MYSQL_USER=\u0026lt;DB_USER\u0026gt; $ export MYSQL_PWD=\u0026lt;DB_PASSWORD\u0026gt; Configure SSL/TLS with CA certificate:\n# Place CA certificate in a secure location $ sudo mkdir -p /etc/mysql/certs $ sudo cp ca.pem /etc/mysql/certs/ $ sudo chmod 644 /etc/mysql/certs/ca.pem Create MySQL configuration file:\n$ cat \u0026gt; ~/.my.cnf \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; [client] ssl-ca=/etc/mysql/certs/ca.pem ssl-mode=REQUIRED EOF $ chmod 600 ~/.my.cnf Test connection:\n$ mysql -e \u0026#34;SELECT VERSION();\u0026#34; Configure Object Storage # Install S3 integration # $ plakar login -email you@example.com $ plakar pkg add s3 Create Object Storage bucket # If not already configured, follow: Exoscale Object Storage setup\nAdd storage connector # $ plakar store add exoscale-sos-mysql \\ location=s3://\u0026lt;SOS_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;ACCESS_KEY\u0026gt; \\ secret_access_key=\u0026lt;SECRET_KEY\u0026gt; \\ use_tls=true Replace:\n\u0026lt;SOS_ENDPOINT\u0026gt;: e.g., sos-ch-dk-2.exo.io \u0026lt;BUCKET_NAME\u0026gt;: e.g., plakar-backups \u0026lt;ACCESS_KEY\u0026gt; and \u0026lt;SECRET_KEY\u0026gt;: From Exoscale IAM Initialize store # $ plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; create Back Up Database # $ mysqldump --single-transaction \\ --routines \\ --triggers \\ --events \\ \u0026lt;DB_NAME\u0026gt; | plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; backup stdin:dump.sql Verify:\n$ plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; ls Restore Database # Retrieve snapshot ID:\n$ plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; ls Restore single database 
# $ plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; cat \u0026lt;SNAPSHOT_ID\u0026gt;:dump.sql | mysql \u0026lt;DB_NAME\u0026gt; Troubleshooting # Connection refused\nVerify MYSQL_HOST, MYSQL_TCP_PORT, MYSQL_USER, MYSQL_PWD environment variables Check database is running in Exoscale Portal Verify network access/firewall rules Authentication failed\nConfirm user credentials S3 upload errors\nCheck S3 credentials: plakar store show exoscale-sos-mysql Verify endpoint URL and bucket name Confirm bucket exists in Exoscale Portal mysqldump not found\nInstall MySQL client: sudo apt install mysql-client ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/exoscale/backup-exoscale-managed-mysql/","section":"Docs","summary":"Back up an Exoscale Managed MySQL database to Exoscale Object Storage using mysqldump and Plakar","title":"Back Up an Exoscale Managed MySQL Database","type":"docs"},{"content":" Back Up an Exoscale Managed MySQL Database # This guide backs up an Exoscale Managed MySQL database using mysqldump streamed through Plakar to Exoscale Object Storage (SOS). 
The result is an encrypted, deduplicated snapshot stored separately from your database infrastructure.\nArchitecture # flowchart TB subgraph Client[\"Backup Client\"] MySQLDump[\"mysqldump\"] Plakar[\"Plakar stdin integration\"] end subgraph DB[\"Exoscale Managed MySQL\"] MySQL[\"MySQL\"] end subgraph Storage[\"Exoscale Object Storage\"] SOS[\"Kloset Store (Encrypted \u0026 Deduplicated)\"] end MySQL --\u003e|SQL dump| MySQLDump MySQLDump --\u003e|stdin| Plakar Plakar --\u003e|Snapshots| SOS Prerequisites # Exoscale account with billing configured Create MySQL Database # Provision database # Log in to Exoscale Portal Go to DBAAS → Services Click on the button with an ellipsis icon, then select Add MySQL Service from the dropdown Configure: Zone: Select location Database name Plan: Select instance size IP Filters: click on Add CIDR and enter your IP address to access the database, or use 0.0.0.0/0 to access it from any IP Click Add Download connection details # In the database connection data tab, download your CA Certificates and get the other connection details. 
In the users tab, save your database user password Install Tools # Install MySQL client:\n$ sudo apt update $ sudo apt install mysql-client Install Plakar using the installation guide.\nConfigure MySQL Connection # Set environment variables from connection details:\n$ export MYSQL_HOST=\u0026lt;DB_HOST\u0026gt; $ export MYSQL_TCP_PORT=21699 $ export MYSQL_USER=\u0026lt;DB_USER\u0026gt; $ export MYSQL_PWD=\u0026lt;DB_PASSWORD\u0026gt; Configure SSL/TLS with CA certificate:\n# Place CA certificate in a secure location $ sudo mkdir -p /etc/mysql/certs $ sudo cp ca.pem /etc/mysql/certs/ $ sudo chmod 644 /etc/mysql/certs/ca.pem Create MySQL configuration file:\n$ cat \u0026gt; ~/.my.cnf \u0026lt;\u0026lt; \u0026#39;EOF\u0026#39; [client] ssl-ca=/etc/mysql/certs/ca.pem ssl-mode=REQUIRED EOF $ chmod 600 ~/.my.cnf Test connection:\n$ mysql -e \u0026#34;SELECT VERSION();\u0026#34; Configure Object Storage # Install S3 integration # $ plakar login -email you@example.com $ plakar pkg add s3 Create Object Storage bucket # If not already configured, follow: Exoscale Object Storage setup\nAdd storage connector # $ plakar store add exoscale-sos-mysql \\ location=s3://\u0026lt;SOS_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;ACCESS_KEY\u0026gt; \\ secret_access_key=\u0026lt;SECRET_KEY\u0026gt; \\ use_tls=true Replace:\n\u0026lt;SOS_ENDPOINT\u0026gt;: e.g., sos-ch-dk-2.exo.io \u0026lt;BUCKET_NAME\u0026gt;: e.g., plakar-backups \u0026lt;ACCESS_KEY\u0026gt; and \u0026lt;SECRET_KEY\u0026gt;: From Exoscale IAM Initialize store # $ plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; create Back Up Database # $ mysqldump --single-transaction \\ --routines \\ --triggers \\ --events \\ \u0026lt;DB_NAME\u0026gt; | plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; backup stdin:dump.sql Verify:\n$ plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; ls Restore Database # Retrieve snapshot ID:\n$ plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; ls Restore single database 
# $ plakar at \u0026#34;@exoscale-sos-mysql\u0026#34; cat \u0026lt;SNAPSHOT_ID\u0026gt;:dump.sql | mysql \u0026lt;DB_NAME\u0026gt; Troubleshooting # Connection refused\nVerify MYSQL_HOST, MYSQL_TCP_PORT, MYSQL_USER, MYSQL_PWD environment variables Check database is running in Exoscale Portal Verify network access/firewall rules Authentication failed\nConfirm user credentials S3 upload errors\nCheck S3 credentials: plakar store show exoscale-sos-mysql Verify endpoint URL and bucket name Confirm bucket exists in Exoscale Portal mysqldump not found\nInstall MySQL client: sudo apt install mysql-client ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/exoscale/backup-exoscale-managed-mysql/","section":"Docs","summary":"Back up an Exoscale Managed MySQL database to Exoscale Object Storage using mysqldump and Plakar","title":"Back Up an Exoscale Managed MySQL Database","type":"docs"},{"content":" Backing Up an OVHcloud Managed PostgreSQL Database # This guide backs up an OVHcloud Managed PostgreSQL database using pg_dump streamed through Plakar to OVHcloud Object Storage. 
The result is an encrypted, deduplicated snapshot stored separately from your database infrastructure.\nArchitecture # flowchart TB subgraph Client[\"Backup Client\"] PGDump[\"pg_dump\"] Plakar[\"Plakar stdin integration\"] end subgraph DB[\"OVHcloud Managed PostgreSQL\"] Postgres[\"PostgreSQL\"] end subgraph Storage[\"OVHcloud S3 Object Storage\"] S3[\"Kloset Store (Encrypted \u0026 Deduplicated)\"] end Postgres --\u003e|SQL dump| PGDump PGDump --\u003e|stdin| Plakar Plakar --\u003e|Snapshots| S3 Prerequisites # OVHcloud account with billing configured Plakar installed on backup client PostgreSQL client tools (pg_dump) OVHcloud Object Storage bucket configured Create PostgreSQL Database # Provision database # Log in to OVHcloud Control Panel Go to Public Cloud → Databases \u0026amp; Analytics → Databases Click Create a service Configure: Database name Engine: PostgreSQL Version: 14-18 (OVHcloud supported) Instance: Select vCores, memory, storage Network: Public network Click Order Create backup user # Open PostgreSQL database in dashboard Go to Users tab Click Add user Configure: Username: backup_user Role: replication Save connection string Install Tools # Install PostgreSQL client:\n$ sudo apt update $ sudo apt install postgresql-client Install Plakar using the installation guide.\nConfigure PostgreSQL Connection # Set environment variables from connection string:\n$ export PGHOST=\u0026lt;DB_HOST\u0026gt; $ export PGPORT=5432 $ export PGUSER=\u0026lt;DB_USER\u0026gt; $ export PGPASSWORD=\u0026lt;DB_PASSWORD\u0026gt; Test connection:\n$ psql -X \u0026lt;DB_NAME\u0026gt; Exit with \\q.\nConfigure Object Storage # Install S3 integration # $ plakar login -email you@example.com $ plakar pkg add s3 Create Object Storage bucket # If not already configured, follow: OVHcloud Object Storage setup\nAdd Kloset store # $ plakar store add ovhcloud-s3-postgres \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;ACCESS_KEY\u0026gt; 
\\ secret_access_key=\u0026lt;SECRET_KEY\u0026gt; \\ use_tls=true Replace:\n\u0026lt;S3_ENDPOINT\u0026gt;: e.g., s3.eu-west-par.io.cloud.ovh.net \u0026lt;BUCKET_NAME\u0026gt;: e.g., plakar-backups \u0026lt;ACCESS_KEY\u0026gt; and \u0026lt;SECRET_KEY\u0026gt;: From OVHcloud Control Panel Initialize store # $ plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; create Back Up Database # Run backup:\n$ pg_dump \u0026lt;DB_NAME\u0026gt; | plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; backup stdin:dump.sql Verify:\n$ plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; ls Restore Database # Retrieve snapshot ID:\n$ plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; ls Restore:\n$ plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; cat \u0026lt;SNAPSHOT_ID\u0026gt;:dump.sql | psql \u0026lt;DB_NAME\u0026gt; Automate Backups # Create cron job for daily backups:\n$ crontab -e Add:\n0 2 * * * pg_dump \u0026lt;DB_NAME\u0026gt; | plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; backup stdin:dump-$(date +\\%Y\\%m\\%d).sql Troubleshooting # Connection refused\nVerify PGHOST, PGPORT, PGUSER, PGPASSWORD environment variables Check network access to managed database Authentication failed\nConfirm backup user has replication role Verify password in connection string S3 upload errors\nCheck S3 credentials: plakar store show ovhcloud-s3-postgres Verify endpoint URL and bucket name Confirm bucket exists in OVHcloud dashboard pg_dump not found\nInstall PostgreSQL client: sudo apt install postgresql-client ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/guides/ovhcloud/backup-ovhcloud-managed-postgres/","section":"Docs","summary":"Backing up an OVHcloud Managed PostgreSQL database to Object Storage using pg_dump and Plakar.","title":"Backing Up an OVHcloud Managed PostgreSQL Database","type":"docs"},{"content":" Backing Up an OVHcloud Managed PostgreSQL Database # This guide backs up an OVHcloud Managed PostgreSQL database using pg_dump streamed through Plakar 
to OVHcloud Object Storage. The result is an encrypted, deduplicated snapshot stored separately from your database infrastructure.\nArchitecture # flowchart TB subgraph Client[\"Backup Client\"] PGDump[\"pg_dump\"] Plakar[\"Plakar stdin integration\"] end subgraph DB[\"OVHcloud Managed PostgreSQL\"] Postgres[\"PostgreSQL\"] end subgraph Storage[\"OVHcloud S3 Object Storage\"] S3[\"Kloset Store (Encrypted \u0026 Deduplicated)\"] end Postgres --\u003e|SQL dump| PGDump PGDump --\u003e|stdin| Plakar Plakar --\u003e|Snapshots| S3 Prerequisites # OVHcloud account with billing configured Plakar installed on backup client PostgreSQL client tools (pg_dump) OVHcloud Object Storage bucket configured Create PostgreSQL Database # Provision database # Log in to OVHcloud Control Panel Go to Public Cloud → Databases \u0026amp; Analytics → Databases Click Create a service Configure: Database name Engine: PostgreSQL Version: 14-18 (OVHcloud supported) Instance: Select vCores, memory, storage Network: Public network Click Order Create backup user # Open PostgreSQL database in dashboard Go to Users tab Click Add user Configure: Username: backup_user Role: replication Save connection string Install Tools # Install PostgreSQL client:\n$ sudo apt update $ sudo apt install postgresql-client Install Plakar using the installation guide.\nConfigure PostgreSQL Connection # Set environment variables from connection string:\n$ export PGHOST=\u0026lt;DB_HOST\u0026gt; $ export PGPORT=5432 $ export PGUSER=\u0026lt;DB_USER\u0026gt; $ export PGPASSWORD=\u0026lt;DB_PASSWORD\u0026gt; Test connection:\n$ psql -X \u0026lt;DB_NAME\u0026gt; Exit with \\q.\nConfigure Object Storage # Install S3 integration # $ plakar login -email you@example.com $ plakar pkg add s3 Create Object Storage bucket # If not already configured, follow: OVHcloud Object Storage setup\nAdd Kloset store # $ plakar store add ovhcloud-s3-postgres \\ location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ 
access_key=\u0026lt;ACCESS_KEY\u0026gt; \\ secret_access_key=\u0026lt;SECRET_KEY\u0026gt; \\ use_tls=true Replace:\n\u0026lt;S3_ENDPOINT\u0026gt;: e.g., s3.eu-west-par.io.cloud.ovh.net \u0026lt;BUCKET_NAME\u0026gt;: e.g., plakar-backups \u0026lt;ACCESS_KEY\u0026gt; and \u0026lt;SECRET_KEY\u0026gt;: From OVHcloud Control Panel Initialize store # $ plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; create Back Up Database # Run backup:\n$ pg_dump \u0026lt;DB_NAME\u0026gt; | plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; backup stdin:dump.sql Verify:\n$ plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; ls Restore Database # Retrieve snapshot ID:\n$ plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; ls Restore:\n$ plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; cat \u0026lt;SNAPSHOT_ID\u0026gt;:dump.sql | psql \u0026lt;DB_NAME\u0026gt; Automate Backups # Create cron job for daily backups:\n$ crontab -e Add:\n0 2 * * * pg_dump \u0026lt;DB_NAME\u0026gt; | plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; backup stdin:dump-$(date +\\%Y\\%m\\%d).sql Troubleshooting # Connection refused\nVerify PGHOST, PGPORT, PGUSER, PGPASSWORD environment variables Check network access to managed database Authentication failed\nConfirm backup user has replication role Verify password in connection string S3 upload errors\nCheck S3 credentials: plakar store show ovhcloud-s3-postgres Verify endpoint URL and bucket name Confirm bucket exists in OVHcloud dashboard pg_dump not found\nInstall PostgreSQL client: sudo apt install postgresql-client ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/ovhcloud/backup-ovhcloud-managed-postgres/","section":"Docs","summary":"Backing up an OVHcloud Managed PostgreSQL database to Object Storage using pg_dump and Plakar.","title":"Backing Up an OVHcloud Managed PostgreSQL Database","type":"docs"},{"content":" Backing Up an OVHcloud Managed PostgreSQL Database # This guide backs up an OVHcloud Managed PostgreSQL 
database using pg_dump streamed through Plakar to OVHcloud Object Storage. The result is an encrypted, deduplicated snapshot stored separately from your database infrastructure.\nArchitecture # flowchart TB subgraph Client[\"Backup Client\"] PGDump[\"pg_dump\"] Plakar[\"Plakar stdin integration\"] end subgraph DB[\"OVHcloud Managed PostgreSQL\"] Postgres[\"PostgreSQL\"] end subgraph Storage[\"OVHcloud S3 Object Storage\"] S3[\"Kloset Store (Encrypted \u0026 Deduplicated)\"] end Postgres --\u003e|SQL dump| PGDump PGDump --\u003e|stdin| Plakar Plakar --\u003e|Snapshots| S3 Prerequisites # OVHcloud account with billing configured Plakar installed on backup client PostgreSQL client tools (pg_dump) OVHcloud Object Storage bucket configured Create PostgreSQL Database # Provision database # Log in to OVHcloud Control Panel Go to Public Cloud → Databases \u0026amp; Analytics → Databases Click Create a service Configure: Database name Engine: PostgreSQL Version: 14-18 (OVHcloud supported) Instance: Select vCores, memory, storage Network: Public network Click Order Create backup user # Open PostgreSQL database in dashboard Go to Users tab Click Add user Configure: Username: backup_user Role: replication Save connection string Install Tools # Install PostgreSQL client:\n$ sudo apt update $ sudo apt install postgresql-client Install Plakar using the installation guide.\nConfigure PostgreSQL Connection # Set environment variables from connection string:\n$ export PGHOST=\u0026lt;DB_HOST\u0026gt; $ export PGPORT=5432 $ export PGUSER=\u0026lt;DB_USER\u0026gt; $ export PGPASSWORD=\u0026lt;DB_PASSWORD\u0026gt; Test connection:\n$ psql -X \u0026lt;DB_NAME\u0026gt; Exit with \\q.\nConfigure Object Storage # Install S3 integration # $ plakar login -email you@example.com $ plakar pkg add s3 Create Object Storage bucket # If not already configured, follow: OVHcloud Object Storage setup\nAdd Kloset store # $ plakar store add ovhcloud-s3-postgres \\ 
location=s3://\u0026lt;S3_ENDPOINT\u0026gt;/\u0026lt;BUCKET_NAME\u0026gt; \\ access_key=\u0026lt;ACCESS_KEY\u0026gt; \\ secret_access_key=\u0026lt;SECRET_KEY\u0026gt; \\ use_tls=true Replace:\n\u0026lt;S3_ENDPOINT\u0026gt;: e.g., s3.eu-west-par.io.cloud.ovh.net \u0026lt;BUCKET_NAME\u0026gt;: e.g., plakar-backups \u0026lt;ACCESS_KEY\u0026gt; and \u0026lt;SECRET_KEY\u0026gt;: From OVHcloud Control Panel Initialize store # $ plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; create Back Up Database # Run backup:\n$ pg_dump \u0026lt;DB_NAME\u0026gt; | plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; backup stdin:dump.sql Verify:\n$ plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; ls Restore Database # Retrieve snapshot ID:\n$ plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; ls Restore:\n$ plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; cat \u0026lt;SNAPSHOT_ID\u0026gt;:dump.sql | psql \u0026lt;DB_NAME\u0026gt; Automate Backups # Create cron job for daily backups:\n$ crontab -e Add:\n0 2 * * * pg_dump \u0026lt;DB_NAME\u0026gt; | plakar at \u0026#34;@ovhcloud-s3-postgres\u0026#34; backup stdin:dump-$(date +\\%Y\\%m\\%d).sql Troubleshooting # Connection refused\nVerify PGHOST, PGPORT, PGUSER, PGPASSWORD environment variables Check network access to managed database Authentication failed\nConfirm backup user has replication role Verify password in connection string S3 upload errors\nCheck S3 credentials: plakar store show ovhcloud-s3-postgres Verify endpoint URL and bucket name Confirm bucket exists in OVHcloud dashboard pg_dump not found\nInstall PostgreSQL client: sudo apt install postgresql-client ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/ovhcloud/backup-ovhcloud-managed-postgres/","section":"Docs","summary":"Backing up an OVHcloud Managed PostgreSQL database to Object Storage using pg_dump and Plakar.","title":"Backing Up an OVHcloud Managed PostgreSQL Database","type":"docs"},{"content":" Command line syntax # Every 
Plakar invocation follows this pattern:\n$ plakar [OPTIONS] [at REPOSITORY] COMMAND [COMMAND_OPTIONS]... Component Required Description OPTIONS No Global options that apply to all commands (see below) at REPOSITORY No Target repository; defaults to $PLAKAR_REPOSITORY or ~/.plakar if omitted COMMAND Yes The operation to perform (e.g. backup, restore, check) COMMAND_OPTIONS No Options and arguments specific to the command (documented under each command reference) A few examples to make the structure concrete:\n# Simplest form: just a command $ plakar version # Operating on a repository $ plakar at /backup ls # Global option + repository + command + command options $ plakar -time at /backup ls -tag daily-backups Global options # Global options appear before the at clause and apply to every command. Options that come after the command are command-specific and are documented in each command reference page.\nOption Description -concurrency int Limit the number of concurrent operations (default: -1) -config string Configuration directory (default: ~/.config/plakar) -cpu int Limit the number of usable CPU cores -disable-security-check Disable update check -enable-security-check Enable update check -keyfile string Use passphrase from key file when prompted -profile-cpu string Profile CPU usage -profile-mem string Profile memory usage -quiet No output except errors -silent No output at all -stdio Use stdio user interface -time Display command execution time -trace string Display trace logs, comma-separated (all, trace, repository, snapshot, server) Option order matters # Options must appear in the correct position. 
Global options go before at, command options go after the command.\n# Correct: -tag is a command option for ls $ plakar -time at /backup ls -tag daily-backups # Wrong: -tag is placed before the command, plakar sees it as the command name $ plakar -time at /backup -tag daily-backups ls # command not found: -tag A misplaced option will either be ignored or cause an error. When something doesn\u0026rsquo;t work as expected, check option placement first.\nGetting help # Plakar has built-in help at every level.\n# Show global usage, all options and available commands $ plakar -h $ plakar help # Show the manual page for a specific command $ plakar help \u0026lt;command\u0026gt; The built-in help is always in sync with the version of Plakar you have installed, making it the most reliable reference for available options and commands.\nEnvironment variables # Variable Description PLAKAR_PASSPHRASE Supply the encryption passphrase non-interactively PLAKAR_REPOSITORY Set the default repository path PLAKAR_PASSPHRASE # When creating or opening an encrypted repository, Plakar prompts for a passphrase. Setting PLAKAR_PASSPHRASE provides it automatically, which is useful in scripts, CI pipelines, or any non-interactive context where a terminal prompt isn\u0026rsquo;t available.\nPLAKAR_REPOSITORY # Sets the default repository location so you don\u0026rsquo;t need to specify at REPOSITORY on every command. When omitted and no at clause is provided, Plakar falls back to ~/.plakar.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/references/command-line-syntax/","section":"Docs","summary":"How Plakar commands are structured, why flag order matters, and how to get help from the CLI.","title":"Command line syntax","type":"docs"},{"content":" Command line syntax # Every Plakar invocation follows this pattern:\n$ plakar [OPTIONS] [at REPOSITORY] COMMAND [COMMAND_OPTIONS]... 
Component Required Description OPTIONS No Global options that apply to all commands (see below) at REPOSITORY No Target repository; defaults to $PLAKAR_REPOSITORY or ~/.plakar if omitted COMMAND Yes The operation to perform (e.g. backup, restore, check) COMMAND_OPTIONS No Options and arguments specific to the command (documented under each command reference) A few examples to make the structure concrete:\n# Simplest form: just a command $ plakar version # Operating on a repository $ plakar at /backup ls # Global option + repository + command + command options $ plakar -time at /backup ls -tag daily-backups Global options # Global options appear before the at clause and apply to every command. Options that come after the command are command-specific and are documented in each command reference page.\nOption Description -concurrency int Limit the number of concurrent operations (default: -1) -config string Configuration directory (default: ~/.config/plakar) -cpu int Limit the number of usable CPU cores -disable-security-check Disable update check -enable-security-check Enable update check -keyfile string Use passphrase from key file when prompted -profile-cpu string Profile CPU usage -profile-mem string Profile memory usage -quiet No output except errors -silent No output at all -stdio Use stdio user interface -time Display command execution time -trace string Display trace logs, comma-separated (all, trace, repository, snapshot, server) Option order matters # Options must appear in the correct position. Global options go before at, command options go after the command.\n# Correct: -tag is a command option for ls $ plakar -time at /backup ls -tag daily-backups # Wrong: -tag is placed before the command, plakar sees it as the command name $ plakar -time at /backup -tag daily-backups ls # command not found: -tag A misplaced option will either be ignored or cause an error. 
When something doesn\u0026rsquo;t work as expected, check option placement first.\nGetting help # Plakar has built-in help at every level.\n# Show global usage, all options and available commands $ plakar -h $ plakar help # Show the manual page for a specific command $ plakar help \u0026lt;command\u0026gt; The built-in help is always in sync with the version of Plakar you have installed, making it the most reliable reference for available options and commands.\nEnvironment variables # Variable Description PLAKAR_PASSPHRASE Supply the encryption passphrase non-interactively PLAKAR_REPOSITORY Set the default repository path PLAKAR_PASSPHRASE # When creating or opening an encrypted repository, Plakar prompts for a passphrase. Setting PLAKAR_PASSPHRASE provides it automatically, which is useful in scripts, CI pipelines, or any non-interactive context where a terminal prompt isn\u0026rsquo;t available.\nPLAKAR_REPOSITORY # Sets the default repository location so you don\u0026rsquo;t need to specify at REPOSITORY on every command. When omitted and no at clause is provided, Plakar falls back to ~/.plakar.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/command-line-syntax/","section":"Docs","summary":"How Plakar commands are structured, why flag order matters, and how to get help from the CLI.","title":"Command line syntax","type":"docs"},{"content":" Command line syntax # Every Plakar invocation follows this pattern:\n$ plakar [OPTIONS] [at REPOSITORY] COMMAND [COMMAND_OPTIONS]... Component Required Description OPTIONS No Global options that apply to all commands (see below) at REPOSITORY No Target repository; defaults to $PLAKAR_REPOSITORY or ~/.plakar if omitted COMMAND Yes The operation to perform (e.g. 
backup, restore, check) COMMAND_OPTIONS No Options and arguments specific to the command (documented under each command reference) A few examples to make the structure concrete:\n# Simplest form: just a command $ plakar version # Operating on a repository $ plakar at /backup ls # Global option + repository + command + command options $ plakar -time at /backup ls -tag daily-backups Global options # Global options appear before the at clause and apply to every command. Options that come after the command are command-specific and are documented in each command reference page.\nOption Description -concurrency int Limit the number of concurrent operations (default: -1) -config string Configuration directory (default: ~/.config/plakar) -cpu int Limit the number of usable CPU cores -disable-security-check Disable update check -enable-security-check Enable update check -keyfile string Use passphrase from key file when prompted -profile-cpu string Profile CPU usage -profile-mem string Profile memory usage -quiet No output except errors -silent No output at all -stdio Use stdio user interface -time Display command execution time -trace string Display trace logs, comma-separated (all, trace, repository, snapshot, server) Option order matters # Options must appear in the correct position. Global options go before at, command options go after the command.\n# Correct: -tag is a command option for ls $ plakar -time at /backup ls -tag daily-backups # Wrong: -tag is placed before the command, plakar sees it as the command name $ plakar -time at /backup -tag daily-backups ls # command not found: -tag A misplaced option will either be ignored or cause an error. 
When something doesn\u0026rsquo;t work as expected, check option placement first.\nGetting help # Plakar has built-in help at every level.\n# Show global usage, all options and available commands $ plakar -h $ plakar help # Show the manual page for a specific command $ plakar help \u0026lt;command\u0026gt; The built-in help is always in sync with the version of Plakar you have installed, making it the most reliable reference for available options and commands.\nEnvironment variables # Variable Description PLAKAR_PASSPHRASE Supply the encryption passphrase non-interactively PLAKAR_REPOSITORY Set the default repository path PLAKAR_PASSPHRASE # When creating or opening an encrypted repository, Plakar prompts for a passphrase. Setting PLAKAR_PASSPHRASE provides it automatically, which is useful in scripts, CI pipelines, or any non-interactive context where a terminal prompt isn\u0026rsquo;t available.\nPLAKAR_REPOSITORY # Sets the default repository location so you don\u0026rsquo;t need to specify at REPOSITORY on every command. When omitted and no at clause is provided, Plakar falls back to ~/.plakar.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/command-line-syntax/","section":"Docs","summary":"How Plakar commands are structured, why flag order matters, and how to get help from the CLI.","title":"Command line syntax","type":"docs"},{"content":" Physical backups with pg_basebackup # The Plakar PostgreSQL integration uses pg_basebackup to perform physical backups of a PostgreSQL cluster. A physical backup captures the entire data directory (all databases, configuration files, and WAL segments) and stores each file as an individual record in the snapshot.\nPhysical backups are faster to restore than logical dumps but are version-locked: the backup must be restored with the same PostgreSQL major version that produced it. 
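A PostgreSQL data directory records the major version that created it in a file named PG_VERSION at its root, which makes for a quick pre-restore sanity check. A runnable sketch against a mock directory; a real check would point at the restored data directory instead:

```shell
# PG_VERSION at the root of a data directory holds the cluster's major
# version (e.g. "16"). Simulate a restored directory for the demo:
demo=$(mktemp -d)
echo "16" > "$demo/PG_VERSION"

major=$(cat "$demo/PG_VERSION")
echo "restore this backup with PostgreSQL major version $major"
rm -rf "$demo"
# prints: restore this backup with PostgreSQL major version 16
```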
Selective restore of a single database or table is not possible.\nFor a deeper understanding of physical backups and base backups, refer to the official PostgreSQL documentation on pg_basebackup.\nRequirements # A running PostgreSQL server with wal_level = replica or higher in postgresql.conf. A PostgreSQL user with the REPLICATION privilege, or a superuser. pg_hba.conf allowing a replication connection from the backup host. pg_basebackup available in $PATH. Install the package # $ plakar pkg add postgresql pg_hba.conf configuration # Ensure pg_hba.conf includes an entry allowing replication connections from the backup host. For example, to allow local replication without a password:\n# TYPE DATABASE USER ADDRESS METHOD local replication all trust Adapt this to your environment and restart PostgreSQL after making changes.\nWhat gets stored in a snapshot # Each file from the PostgreSQL data directory is stored as an individual record in the snapshot, preserving paths, permissions, and timestamps. A subpath cannot be specified in the URI — pg_basebackup always backs up the entire cluster.\nA /manifest.json record is also written before the backup data, containing cluster-level metadata. See Snapshot manifest below.\nBack up the cluster # $ plakar source add mypg postgres+bin://replicator:secret@db.example.com $ plakar at /var/backups backup @mypg Restore the cluster # There is no dedicated destination connector for physical backups. 
Because the snapshot contains plain files, restore them to a local directory using the standard filesystem restore:\n$ plakar at /var/backups restore -to ./pgdata \u0026lt;snapshot_id\u0026gt; Then start PostgreSQL against the restored directory:\n$ docker run --rm \\ -v \u0026#34;$PWD/pgdata:/var/lib/postgresql/data\u0026#34; \\ postgres:\u0026lt;version\u0026gt; Replace \u0026lt;version\u0026gt; with the same major PostgreSQL version that was running when the backup was taken.\nTo restore directly to a remote host, use an SFTP destination:\n$ plakar restore -to sftp://user@host/var/lib/postgresql/data \u0026lt;snapshot_id\u0026gt; # then on the remote host: pg_ctl -D /var/lib/postgresql/data start List snapshots # $ plakar at /var/backups ls Source options # Parameter Default Description location — Connection URI: postgres+bin://[user[:password]@]host[:port]. A subpath is not allowed. host localhost Server hostname. Overrides the URI host. port 5432 Server port. Overrides the URI port. username — PostgreSQL replication username. Overrides the URI user. password — PostgreSQL password. Overrides the URI password. pg_bin_dir — Directory containing the PostgreSQL client binaries (pg_basebackup, psql). When omitted, binaries are resolved via $PATH. Useful when multiple PostgreSQL versions are installed. ssl_mode prefer SSL mode: disable, allow, prefer, require, verify-ca, or verify-full. ssl_cert — Path to the client SSL certificate file (PEM). ssl_key — Path to the client SSL private key file (PEM). ssl_root_cert — Path to the root CA certificate used to verify the server (PEM). Snapshot manifest # Every snapshot includes a /manifest.json record written before the backup data. 
It captures the cluster state at the time of backup, including the same fields as logical backups (server version, roles, tablespaces, databases, and relation details) plus pg_basebackup_version.\nMetadata collection is best-effort: if a query fails, the affected field is omitted and the backup continues.\nConsiderations # Version compatibility # Physical backups must be restored with the same PostgreSQL major version. For cross-version restores, use a logical backup instead.\nServer must be stopped before restore # Do not restore into a data directory that is in use by a running PostgreSQL instance. Stop the server first, or restore to a fresh directory.\nRead-only mounts # Plakar supports mounting a Kloset store as a read-only FUSE filesystem. PostgreSQL requires read-write access to its data directory, so it cannot run directly from a read-only mount. Always restore to a writable directory first.\nKloset store location # The examples above use /var/backups as the Kloset store. Any supported store backend can be used instead. See Create a Kloset store for details.\nSee also # PostgreSQL integration on GitHub ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/guides/postgres/pg-base-backup/","section":"Docs","summary":"Back up a PostgreSQL cluster using the Plakar PostgreSQL integration and restore it.","title":"Physical backups with pg_basebackup","type":"docs"},{"content":" Physical backups with pg_basebackup # pg_basebackup is the standard PostgreSQL utility used to perform physical backups of a PostgreSQL cluster. 
Unlike SQL dumps, a physical backup captures the entire data directory, allowing you to restore a PostgreSQL instance exactly as it was at backup time, including configuration files and internal state.\nFor a deeper understanding of physical backups, replication, and base backups, refer to the official PostgreSQL documentation on pg_basebackup.\nRequirements # This guide assumes that you have:\nA running PostgreSQL server configured to allow base backups. A PostgreSQL role with REPLICATION privileges. The environment variables PGHOST, PGPORT, PGUSER, and PGPASSWORD set to connect to your PostgreSQL server. pg_basebackup available on the system where the backup is performed. pg_hba.conf configuration # Since pg_basebackup requires replication connections, ensure that your pg_hba.conf file includes an entry allowing the backup user to connect for replication. For example:\n# TYPE DATABASE USER ADDRESS METHOD local replication all trust This entry allows local replication connections without a password and should only be used in trusted environments.\nAdapt the configuration to your needs and restart PostgreSQL after making changes.\nPerforming the backup # Directory-based backup # pg_basebackup expects a directory where the database will be copied. This directory can be stored with Plakar, like any other directory.\nRun the following commands:\n$ export PGUSER=xxx $ export PGPORT=5432 $ export PGHOST=xxx $ export PGPASSWORD=xxx $ pg_basebackup -D ./database $ plakar at /var/backups backup ./database $ rm -rf ./database This sequence of commands:\nExports the necessary environment variables to connect to the PostgreSQL server. Runs pg_basebackup, storing the backup in a local directory named ./database. Uses Plakar to back up the ./database directory into the Kloset store at /var/backups. Removes the local backup directory to free up space. 
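To run the sequence above unattended, it can be wrapped in a script and scheduled from cron. A hypothetical /etc/cron.d entry; the user, schedule, script path, and log path are all placeholders, and in practice the PostgreSQL password should come from ~/.pgpass rather than appearing anywhere in the crontab:

```
# /etc/cron.d/pg-plakar (hypothetical): nightly base backup at 02:30 as the
# postgres user; pg-backup.sh contains the pg_basebackup + plakar + cleanup
# sequence shown above.
30 2 * * * postgres /usr/local/bin/pg-backup.sh >> /var/log/pg-backup.log 2>&1
```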
With this method, WAL files are fetched as needed during the base backup process.\nTar-based backup # Alternatively, pg_basebackup can create a tarball. This tarball can be backed up using the tar source importer of Plakar.\n$ export PGUSER=xxx $ export PGPORT=5432 $ export PGHOST=xxx $ export PGPASSWORD=xxx $ pg_basebackup -D - -F tar -X fetch \u0026gt; /tmp/pg_backup.tar $ plakar at /var/backups backup tar:///tmp/pg_backup.tar $ rm /tmp/pg_backup.tar This method may be slower than a directory-based backup as it requires serializing the data into a tarball. Ensure you have enough temporary disk space for the tarball before running the backup.\nRestoring a physical backup # To restore a physical backup created with pg_basebackup, use the plakar restore command to extract the backup to a local directory.\n$ plakar at /var/backups restore -to ./mydb 3bcb4fd8 This command restores the snapshot with ID 3bcb4fd8 from the Kloset store located at /var/backups to a local directory named ./mydb.\nThis command is not PostgreSQL-specific. 
It works for any data stored in Plakar.\nRunning PostgreSQL with Docker from a physical backup # With a physical backup, you can easily run a PostgreSQL instance using Docker, provided the PostgreSQL version matches the one used to create the backup.\n$ docker run --rm -ti --name pg -v ./mydb:/var/lib/postgresql/data postgres This command starts a PostgreSQL container using the official postgres image, mounting the restored backup directory ./mydb as the data directory for PostgreSQL.\nReplace postgres with postgres:\u0026lt;version\u0026gt; to specify the desired PostgreSQL version.\nTo connect to the running PostgreSQL instance, use:\n$ docker exec -ti pg psql -U postgres -c \u0026#39;\\l\u0026#39; Considerations # Physical vs logical backups # SQL dumps (pg_dump, pg_dumpall) are logical backups, portable across PostgreSQL versions and architectures.\npg_basebackup produces physical backups, which are faster to restore but must be used with a compatible PostgreSQL version and system layout.\nChoose the method that best fits your recovery and portability requirements.\nRead-only mounts # Plakar supports a FUSE filesystem that allows mounting a Kloset store as a read-only filesystem.\nPostgreSQL requires read-write access to its data directory, even for read-only operations. Therefore, it is not possible to run PostgreSQL directly from a read-only mount of a Kloset store.\nKloset store location # In the examples above, we used /var/backups as the Kloset store location.\nIt is possible to use other store locations, for example to store the snapshots in a cloud storage bucket. 
Check the guide Creating a Kloset Store for more information on setting up Kloset stores.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/postgres/pg-base-backup/","section":"Docs","summary":"How to perform physical backups of a PostgreSQL cluster using pg_basebackup, and store them with Plakar.","title":"Physical backups with pg_basebackup","type":"docs"},{"content":" Physical backups with pg_basebackup # The Plakar PostgreSQL integration uses pg_basebackup to perform physical backups of a PostgreSQL cluster. A physical backup captures the entire data directory (all databases, configuration files, and WAL segments) and stores each file as an individual record in the snapshot.\nPhysical backups are faster to restore than logical dumps but are version-locked: the backup must be restored with the same PostgreSQL major version that produced it. Selective restore of a single database or table is not possible.\nFor a deeper understanding of physical backups and base backups, refer to the official PostgreSQL documentation on pg_basebackup.\nRequirements # A running PostgreSQL server with wal_level = replica or higher in postgresql.conf. A PostgreSQL user with the REPLICATION privilege, or a superuser. pg_hba.conf allowing a replication connection from the backup host. pg_basebackup available in $PATH. Install the package # $ plakar pkg add postgresql pg_hba.conf configuration # Ensure pg_hba.conf includes an entry allowing replication connections from the backup host. For example, to allow local replication without a password:\n# TYPE DATABASE USER ADDRESS METHOD local replication all trust Adapt this to your environment and restart PostgreSQL after making changes.\nWhat gets stored in a snapshot # Each file from the PostgreSQL data directory is stored as an individual record in the snapshot, preserving paths, permissions, and timestamps. 
A subpath cannot be specified in the URI — pg_basebackup always backs up the entire cluster.\nA /manifest.json record is also written before the backup data, containing cluster-level metadata. See Snapshot manifest below.\nBack up the cluster # $ plakar source add mypg postgres+bin://replicator:secret@db.example.com $ plakar at /var/backups backup @mypg Restore the cluster # There is no dedicated destination connector for physical backups. Because the snapshot contains plain files, restore them to a local directory using the standard filesystem restore:\n$ plakar at /var/backups restore -to ./pgdata \u0026lt;snapshot_id\u0026gt; Then start PostgreSQL against the restored directory:\n$ docker run --rm \\ -v \u0026#34;$PWD/pgdata:/var/lib/postgresql/data\u0026#34; \\ postgres:\u0026lt;version\u0026gt; Replace \u0026lt;version\u0026gt; with the same major PostgreSQL version that was running when the backup was taken.\nTo restore directly to a remote host, use an SFTP destination:\n$ plakar restore -to sftp://user@host/var/lib/postgresql/data \u0026lt;snapshot_id\u0026gt; # then on the remote host: pg_ctl -D /var/lib/postgresql/data start List snapshots # $ plakar at /var/backups ls Source options # Parameter Default Description location — Connection URI: postgres+bin://[user[:password]@]host[:port]. A subpath is not allowed. host localhost Server hostname. Overrides the URI host. port 5432 Server port. Overrides the URI port. username — PostgreSQL replication username. Overrides the URI user. password — PostgreSQL password. Overrides the URI password. pg_bin_dir — Directory containing the PostgreSQL client binaries (pg_basebackup, psql). When omitted, binaries are resolved via $PATH. Useful when multiple PostgreSQL versions are installed. ssl_mode prefer SSL mode: disable, allow, prefer, require, verify-ca, or verify-full. ssl_cert — Path to the client SSL certificate file (PEM). ssl_key — Path to the client SSL private key file (PEM). 
ssl_root_cert — Path to the root CA certificate used to verify the server (PEM). Snapshot manifest # Every snapshot includes a /manifest.json record written before the backup data. It captures the cluster state at the time of backup, including the same fields as logical backups (server version, roles, tablespaces, databases, and relation details) plus pg_basebackup_version.\nMetadata collection is best-effort: if a query fails, the affected field is omitted and the backup continues.\nConsiderations # Version compatibility # Physical backups must be restored with the same PostgreSQL major version. For cross-version restores, use a logical backup instead.\nServer must be stopped before restore # Do not restore into a data directory that is in use by a running PostgreSQL instance. Stop the server first, or restore to a fresh directory.\nRead-only mounts # Plakar supports mounting a Kloset store as a read-only FUSE filesystem. PostgreSQL requires read-write access to its data directory, so it cannot run directly from a read-only mount. Always restore to a writable directory first.\nKloset store location # The examples above use /var/backups as the Kloset store. Any supported store backend can be used instead. See Create a Kloset store for details.\nSee also # PostgreSQL integration on GitHub ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/postgres/pg-base-backup/","section":"Docs","summary":"Back up a PostgreSQL cluster using the Plakar PostgreSQL integration and restore it.","title":"Physical backups with pg_basebackup","type":"docs"},{"content":" SFTP / SSH # SFTP is a protocol for securely transferring files over SSH. The SFTP integration includes three connectors:\nConnector type Description Storage connector Host a Kloset store on any SFTP-accessible server. Source connector Back up a remote directory reachable over SFTP into a Kloset store. Destination connector Restore data from a Kloset store to an SFTP target. 
Requirements\nAn SFTP/SSH server with appropriate read and write permissions. Typical use cases\nEncrypted backups of remote Linux/BSD/application servers over SSH. Offsite or air-gapped snapshot storage by hosting a Kloset store on an SFTP server. Data recovery workflows: restore server trees over SSH to warm or cold standby. Centralized archiving of distributed environments into one Kloset. Compatibility\nWorks with standard OpenSSH SFTP. On‑prem, cloud, and hybrid deployments supported. Legacy or proprietary SFTP variants that diverge from SSH/SFTP standards are not supported. Installation # The SFTP integration is distributed as a Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the SFTP package:\n$ plakar pkg add sftp Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build sftp A package archive will be created in the current directory (e.g., sftp_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./sftp_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nConnectors # The SFTP package provides three connectors: a storage connector for hosting Kloset stores on SFTP servers, a source connector for backing up remote directories over SFTP, and a destination connector for restoring data over SFTP.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar SFTP package provides a storage connector to host Kloset stores on SFTP servers.\nflowchart LR Source[\"Source data\"] Source --\u003e 
Plakar[\"Plakar\"] Via[\"Store snapshot via SFTP storage connector\"] subgraph Store[\"SFTP Server\"] Kloset[\"Kloset Store\"] end Plakar --\u003e Via --\u003e Kloset Configure # # Configure the Kloset store $ plakar store add sftp_store sftp://sftp-prod/backups # Initialize the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; check # Backup a local folder to the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; backup /etc # Backup a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Description location sftp://[user@]host[:port]/path passphrase The Kloset store passphrase Source connector # The Plakar SFTP package provides a source connector to back up remote directories reachable over SFTP.\nflowchart LR subgraph Source[\"SFTP Server\"] FS[\"/srv/data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via SFTP source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Configure a source pointing to the remote SFTP directory $ plakar source add sftp_src sftp://sftp-prod:/srv/data # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@sftp_src\u0026#34; # Or back up the remote directory to the Kloset store on SFTP created above $ plakar at \u0026#34;@sftp_store\u0026#34; backup \u0026#34;@sftp_src\u0026#34; Options # These options can be set when configuring the source connector with plakar source add or plakar source set:\nOption Purpose location sftp://[user@]host[:port]/path of the remote directory to back up Destination connector # The Plakar SFTP package provides a destination 
connector to restore snapshots to remote directories reachable over SFTP.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data via SFTP destination connector\"] subgraph Destination[\"SFTP Server\"] FS[\"/srv/data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Configure a destination pointing to the remote SFTP directory $ plakar destination add sftp_dst sftp://sftp-prod:/srv/restore # Restore a snapshot from a filesystem-hosted Kloset store to the remote SFTP directory $ plakar at /var/backups restore -to \u0026#34;@sftp_dst\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store on SFTP created above to the remote SFTP directory $ plakar at \u0026#34;@sftp_store\u0026#34; restore -to \u0026#34;@sftp_dst\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # These options can be set when configuring the destination connector with plakar destination add or plakar destination set:\nOption Purpose location sftp://[user@]host[:port]/path of the remote directory to restore to SSH best practices for reliability # Create a host alias (recommended) # Define an alias in ~/.ssh/config so Plakar commands stay concise and stable:\nHost sftp-prod HostName host.example.com User sftpuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar Test the alias:\n$ sftp sftp-prod Then reference it in Plakar URLs:\n$ plakar store add sftp_store sftp://sftp-prod/backups $ plakar source add sftp_src sftp://sftp-prod:/srv/data $ plakar destination add sftp_dst sftp://sftp-prod:/srv/restore Use key‑based, passwordless SSH # Unattended jobs must not prompt for passwords.\n$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_plakar -C plakar@backup $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub sftpuser@host.example.com $ sftp -i ~/.ssh/id_ed25519_plakar sftpuser@host.example.com If your private key is encrypted, run an agent:\n$ eval \u0026#34;$(ssh-agent -s)\u0026#34; $ ssh-add ~/.ssh/id_ed25519_plakar Host keys and trust # For production, 
keep strict host key checking enabled and manage ~/.ssh/known_hosts normally. Avoid disabling host key checks except in isolated test environments.\nLimitations and scope # What is captured during backup\nFiles and directories reachable under the specified SFTP path File metadata (timestamps, permissions, sizes) What is not captured\nSystem configuration outside the backed‑up path (e.g., SSHD config, firewall rules) OS user and group database, running processes, or service state SSH server settings and known_hosts Snapshot consistency\nChanges during backup (creates, updates, deletes) may result in a snapshot that reflects different points in time for different files. For highly dynamic paths, consider quiescing the workload or backing up from a read‑only replica.\nTroubleshooting # Authentication or permission errors\nValidate the SSH key, username, and target path permissions. Ensure the SFTP subsystem is enabled on the server. Host key verification failed\nConnect once interactively to record the host key in ~/.ssh/known_hosts. Only use insecure_ignore_host_key=true-style options in disposable test environments. Chroot or path issues\nIf the server uses chrooted SFTP, verify the effective path inside the chroot matches your URL. Passphrase prompts\nUse ssh-agent to cache the key, or deploy a dedicated non‑encrypted key restricted to the backup account. FAQ # How do I set username, port, or identity file?\nPrefer SSH config (~/.ssh/config) with a host alias.\nCan I move snapshots between two SFTP‑hosted stores?\nYes. 
Define two stores, then use plakar at \u0026quot;@store1\u0026quot; sync to \u0026quot;@store2\u0026quot; to synchronize them.\nSee also # Plakar Architecture (Kloset Engine) OpenSSH / SFTP Documentation ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/integrations/sftp/","section":"Docs","summary":"Back up and restore remote directories over SFTP/SSH, and host Kloset stores on remote SFTP servers.","title":"SFTP / SSH","type":"docs"},{"content":" SFTP / SSH # SFTP is a protocol for securely transferring files over SSH. The SFTP integration includes three connectors:\nConnector type Description Storage connector Host a Kloset store on any SFTP-accessible server. Source connector Back up a remote directory reachable over SFTP into a Kloset store. Destination connector Restore data from a Kloset store to an SFTP target. Requirements\nAn SFTP/SSH server with appropriate read and write permissions. Typical use cases\nEncrypted backups of remote Linux/BSD/application servers over SSH. Offsite or air-gapped snapshot storage by hosting a Kloset store on an SFTP server. Data recovery workflows: restore server trees over SSH to warm or cold standby. Centralized archiving of distributed environments into one Kloset. Compatibility\nWorks with standard OpenSSH SFTP. On‑prem, cloud, and hybrid deployments supported. Legacy or proprietary SFTP variants that diverge from SSH/SFTP standards are not supported. Installation # The SFTP integration is distributed as a Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the SFTP package:\n$ plakar pkg add sftp Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build sftp A package archive will be created in the current directory (e.g., sftp_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./sftp_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nConnectors # The SFTP package provides three connectors: a storage connector for hosting Kloset stores on SFTP servers, a source connector for backing up remote directories over SFTP, and a destination connector for restoring data over SFTP.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar SFTP package provides a storage connector to host Kloset stores on SFTP servers.\nflowchart LR Source[\"Source data\"] Source --\u003e Plakar[\"Plakar\"] Via[\"Store snapshot via SFTP storage connector\"] subgraph Store[\"SFTP Server\"] Kloset[\"Kloset Store\"] end Plakar --\u003e Via --\u003e Kloset Configure # # Configure the Kloset store $ plakar store add sftp_store sftp://sftp-prod/backups # Initialize the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; check # Backup a local folder to the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; backup /etc # Backup a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store 
set:\nOption Description location sftp://[user@]host[:port]/path passphrase The Kloset store passphrase Source connector # The Plakar SFTP package provides a source connector to back up remote directories reachable over SFTP.\nflowchart LR subgraph Source[\"SFTP Server\"] FS[\"/srv/data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via SFTP source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Configure a source pointing to the remote SFTP directory $ plakar source add sftp_src sftp://sftp-prod:/srv/data # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@sftp_src\u0026#34; # Or back up the remote directory to the Kloset store on SFTP created above $ plakar at \u0026#34;@sftp_store\u0026#34; backup \u0026#34;@sftp_src\u0026#34; Options # These options can be set when configuring the source connector with plakar source add or plakar source set:\nOption Purpose location sftp://[user@]host[:port]/path of the remote directory to back up Destination connector # The Plakar SFTP package provides a destination connector to restore snapshots to remote directories reachable over SFTP.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data via SFTP destination connector\"] subgraph Destination[\"SFTP Server\"] FS[\"/srv/data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Configure a destination pointing to the remote SFTP directory $ plakar destination add sftp_dst sftp://sftp-prod:/srv/restore # Restore a snapshot from a filesystem-hosted Kloset store to the remote SFTP directory $ plakar at /var/backups restore -to \u0026#34;@sftp_dst\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store on SFTP created above to the remote SFTP directory $ plakar at \u0026#34;@sftp_store\u0026#34; restore -to \u0026#34;@sftp_dst\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # These options can be set when 
configuring the destination connector with plakar destination add or plakar destination set:\nOption Purpose location sftp://[user@]host[:port]/path of the remote directory to restore to SSH best practices for reliability # Create a host alias (recommended) # Define an alias in ~/.ssh/config so Plakar commands stay concise and stable:\nHost sftp-prod HostName host.example.com User sftpuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar Test the alias:\n$ sftp sftp-prod Then reference it in Plakar URLs:\n$ plakar store add sftp_store sftp://sftp-prod/backups $ plakar source add sftp_src sftp://sftp-prod:/srv/data $ plakar destination add sftp_dst sftp://sftp-prod:/srv/restore Use key‑based, passwordless SSH # Unattended jobs must not prompt for passwords.\n$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_plakar -C plakar@backup $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub sftpuser@host.example.com $ sftp -i ~/.ssh/id_ed25519_plakar sftpuser@host.example.com If your private key is encrypted, run an agent:\n$ eval \u0026#34;$(ssh-agent -s)\u0026#34; $ ssh-add ~/.ssh/id_ed25519_plakar Host keys and trust # For production, keep strict host key checking enabled and manage ~/.ssh/known_hosts normally. Avoid disabling host key checks except in isolated test environments.\nLimitations and scope # What is captured during backup\nFiles and directories reachable under the specified SFTP path File metadata (timestamps, permissions, sizes) What is not captured\nSystem configuration outside the backed‑up path (e.g., SSHD config, firewall rules) OS user and group database, running processes, or service state SSH server settings and known_hosts Snapshot consistency\nChanges during backup (creates, updates, deletes) may result in a snapshot that reflects different points in time for different files. 
For highly dynamic paths, consider quiescing the workload or backing up from a read‑only replica.\nTroubleshooting # Authentication or permission errors\nValidate the SSH key, username, and target path permissions. Ensure the SFTP subsystem is enabled on the server. Host key verification failed\nConnect once interactively to record the host key in ~/.ssh/known_hosts. Only use insecure_ignore_host_key=true-style options in disposable test environments. Chroot or path issues\nIf the server uses chrooted SFTP, verify the effective path inside the chroot matches your URL. Passphrase prompts\nUse ssh-agent to cache the key, or deploy a dedicated non‑encrypted key restricted to the backup account. FAQ # How do I set username, port, or identity file?\nPrefer SSH config (~/.ssh/config) with a host alias.\nCan I move snapshots between two SFTP‑hosted stores?\nYes. Define two stores, then use plakar at \u0026quot;@store1\u0026quot; sync to \u0026quot;@store2\u0026quot; to synchronize them.\nSee also # Plakar Architecture (Kloset Engine) OpenSSH / SFTP Documentation ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/integrations/sftp/","section":"Docs","summary":"Back up and restore remote directories over SFTP/SSH, and host Kloset stores on remote SFTP servers.","title":"SFTP / SSH","type":"docs"},{"content":" SFTP / SSH # SFTP is a protocol for securely transferring files over SSH. The SFTP integration includes three connectors:\nConnector type Description Storage connector Host a Kloset store on any SFTP-accessible server. Source connector Back up a remote directory reachable over SFTP into a Kloset store. Destination connector Restore data from a Kloset store to an SFTP target. Requirements\nAn SFTP/SSH server with appropriate read and write permissions. Typical use cases\nEncrypted backups of remote Linux/BSD/application servers over SSH. Offsite or air-gapped snapshot storage by hosting a Kloset store on an SFTP server. 
Data recovery workflows: restore server trees over SSH to warm or cold standby. Centralized archiving of distributed environments into one Kloset. Compatibility\nWorks with standard OpenSSH SFTP. On‑prem, cloud, and hybrid deployments supported. Legacy or proprietary SFTP variants that diverge from SSH/SFTP standards are not supported. Installation # The SFTP integration is distributed as a Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the SFTP package:\n$ plakar pkg add sftp Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build sftp A package archive will be created in the current directory (e.g., sftp_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./sftp_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nConnectors # The SFTP package provides three connectors: a storage connector for hosting Kloset stores on SFTP servers, a source connector for backing up remote directories over SFTP, and a destination connector for restoring data over SFTP.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar SFTP package provides a storage connector to host Kloset stores on SFTP servers.\nflowchart LR Source[\"Source data\"] Source --\u003e Plakar[\"Plakar\"] Via[\"Store snapshot viaSFTP storage connector\"] subgraph Store[\"SFTP Server\"] Kloset[\"Kloset Store\"] end Plakar --\u003e Via --\u003e Kloset Configure # # Configure the Kloset store $ plakar store add sftp_store 
sftp://sftp-prod/backups # Initialize the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; check # Backup a local folder to the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; backup /etc # Backup a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Description location sftp://[user@]host[:port]/path passphrase The Kloset store passphrase Source connector # The Plakar SFTP package provides a source connector to back up remote directories reachable over SFTP.\nflowchart LR subgraph Source[\"SFTP Server\"] FS[\"/srv/data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaSFTP source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Configure a source pointing to the remote SFTP directory $ plakar source add sftp_src sftp://sftp-prod:/srv/data # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@sftp_src\u0026#34; # Or back up the remote directory to the Kloset store on SFTP created above $ plakar at \u0026#34;@sftp_store\u0026#34; backup \u0026#34;@sftp_src\u0026#34; Options # These options can be set when configuring the source connector with plakar source add or plakar source set:\nOption Purpose location sftp://[user@]host[:port]/path of the remote directory to back up Destination connector # The Plakar SFTP package provides a destination connector to restore snapshots to remote directories reachable over SFTP.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaSFTP destination connector\"] subgraph Destination[\"SFTP Server\"] FS[\"/srv/data\"] end 
Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Configure a destination pointing to the remote SFTP directory $ plakar destination add sftp_dst sftp://sftp-prod:/srv/restore # Restore a snapshot from a filesystem-hosted Kloset store to the remote SFTP directory $ plakar at /var/backups restore -to \u0026#34;@sftp_dst\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store on SFTP created above to the remote SFTP directory $ plakar at \u0026#34;@sftp_store\u0026#34; restore -to \u0026#34;@sftp_dst\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # These options can be set when configuring the destination connector with plakar destination add or plakar destination set:\nOption Purpose location sftp://[user@]host[:port]/path of the remote directory to restore to SSH best practices for reliability # Create a host alias (recommended) # Define an alias in ~/.ssh/config so Plakar commands stay concise and stable:\nHost sftp-prod HostName host.example.com User sftpuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar Test the alias:\n$ sftp sftp-prod Then reference it in Plakar URLs:\n$ plakar store add sftp_store sftp://sftp-prod/backups $ plakar source add sftp_src sftp://sftp-prod:/srv/data $ plakar destination add sftp_dst sftp://sftp-prod:/srv/restore Use key‑based, passwordless SSH # Unattended jobs must not prompt for passwords.\n$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_plakar -C plakar@backup $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub sftpuser@host.example.com $ sftp -i ~/.ssh/id_ed25519_plakar sftpuser@host.example.com If your private key is encrypted, run an agent:\n$ eval \u0026#34;$(ssh-agent -s)\u0026#34; $ ssh-add ~/.ssh/id_ed25519_plakar Host keys and trust # For production, keep strict host key checking enabled and manage ~/.ssh/known_hosts normally. 
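One way to record a server's host key ahead of the first unattended run, rather than trusting it interactively (the hostname below is an example):

```shell
# Append the server's hashed host key to known_hosts before the
# first unattended run (replace host.example.com with your server).
ssh-keyscan -H host.example.com >> ~/.ssh/known_hosts
```
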
Avoid disabling host key checks except in isolated test environments.\nLimitations and scope # What is captured during backup\nFiles and directories reachable under the specified SFTP path File metadata (timestamps, permissions, sizes) What is not captured\nSystem configuration outside the backed‑up path (e.g., SSHD config, firewall rules) OS user and group database, running processes, or service state SSH server settings and known_hosts Snapshot consistency\nChanges during backup (creates, updates, deletes) may result in a snapshot that reflects different points in time for different files. For highly dynamic paths, consider quiescing the workload or backing up from a read‑only replica.\nTroubleshooting # Authentication or permission errors\nValidate the SSH key, username, and target path permissions. Ensure the SFTP subsystem is enabled on the server. Host key verification failed\nConnect once interactively to record the host key in ~/.ssh/known_hosts. Only use insecure_ignore_host_key=true-style options in disposable test environments. Chroot or path issues\nIf the server uses chrooted SFTP, verify the effective path inside the chroot matches your URL. Passphrase prompts\nUse ssh-agent to cache the key, or deploy a dedicated non‑encrypted key restricted to the backup account. FAQ # How do I set username, port, or identity file?\nPrefer SSH config (~/.ssh/config) with a host alias.\nCan I move snapshots between two SFTP‑hosted stores?\nYes. 
Define two stores, then use plakar at \u0026quot;@store1\u0026quot; sync to \u0026quot;@store2\u0026quot; to synchronize them.\nSee also # Plakar Architecture (Kloset Engine) OpenSSH / SFTP Documentation ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/integrations/sftp/","section":"Docs","summary":"Back up and restore remote directories over SFTP/SSH, and host Kloset stores on remote SFTP servers.","title":"SFTP / SSH","type":"docs"},{"content":" SFTP / SSH # SFTP is a protocol for securely transferring files over SSH. The SFTP integration includes three connectors:\nConnector type Description Storage connector Host a Kloset store on any SFTP-accessible server. Source connector Back up a remote directory reachable over SFTP into a Kloset store. Destination connector Restore data from a Kloset store to an SFTP target. Requirements\nAn SFTP/SSH server with appropriate read and write permissions. Typical use cases\nEncrypted backups of remote Linux/BSD/application servers over SSH. Offsite or air-gapped snapshot storage by hosting a Kloset store on an SFTP server. Data recovery workflows: restore server trees over SSH to warm or cold standby. Centralized archiving of distributed environments into one Kloset. Compatibility\nWorks with standard OpenSSH SFTP. On‑prem, cloud, and hybrid deployments supported. Legacy or proprietary SFTP variants that diverge from SSH/SFTP standards are not supported. Installation # The SFTP integration is distributed as a Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the SFTP package:\n$ plakar pkg add sftp Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build sftp A package archive will be created in the current directory (e.g., sftp_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./sftp_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nConnectors # The SFTP package provides three connectors: a storage connector for hosting Kloset stores on SFTP servers, a source connector for backing up remote directories over SFTP, and a destination connector for restoring data over SFTP.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar SFTP package provides a storage connector to host Kloset stores on SFTP servers.\nflowchart LR Source[\"Source data\"] Source --\u003e Plakar[\"Plakar\"] Via[\"Store snapshot viaSFTP storage connector\"] subgraph Store[\"SFTP Server\"] Kloset[\"Kloset Store\"] end Plakar --\u003e Via --\u003e Kloset Configure # # Configure the Kloset store $ plakar store add sftp_store sftp://sftp-prod/backups # Initialize the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; check # Backup a local folder to the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; backup /etc # Backup a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@sftp_store\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store 
set:\nOption Description location sftp://[user@]host[:port]/path passphrase The Kloset store passphrase Source connector # The Plakar SFTP package provides a source connector to back up remote directories reachable over SFTP.\nflowchart LR subgraph Source[\"SFTP Server\"] FS[\"/srv/data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaSFTP source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Configure a source pointing to the remote SFTP directory $ plakar source add sftp_src sftp://sftp-prod:/srv/data # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@sftp_src\u0026#34; # Or back up the remote directory to the Kloset store on SFTP created above $ plakar at \u0026#34;@sftp_store\u0026#34; backup \u0026#34;@sftp_src\u0026#34; Options # These options can be set when configuring the source connector with plakar source add or plakar source set:\nOption Purpose location sftp://[user@]host[:port]/path of the remote directory to back up Destination connector # The Plakar SFTP package provides a destination connector to restore snapshots to remote directories reachable over SFTP.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaSFTP destination connector\"] subgraph Destination[\"SFTP Server\"] FS[\"/srv/data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Configure a destination pointing to the remote SFTP directory $ plakar destination add sftp_dst sftp://sftp-prod:/srv/restore # Restore a snapshot from a filesystem-hosted Kloset store to the remote SFTP directory $ plakar at /var/backups restore -to \u0026#34;@sftp_dst\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store on SFTP created above to the remote SFTP directory $ plakar at \u0026#34;@sftp_store\u0026#34; restore -to \u0026#34;@sftp_dst\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # These options can be set when 
configuring the destination connector with plakar destination add or plakar destination set:\nOption Purpose location sftp://[user@]host[:port]/path of the remote directory to restore to SSH best practices for reliability # Create a host alias (recommended) # Define an alias in ~/.ssh/config so Plakar commands stay concise and stable:\nHost sftp-prod HostName host.example.com User sftpuser Port 22 IdentityFile ~/.ssh/id_ed25519_plakar Test the alias:\n$ sftp sftp-prod Then reference it in Plakar URLs:\n$ plakar store add sftp_store sftp://sftp-prod/backups $ plakar source add sftp_src sftp://sftp-prod:/srv/data $ plakar destination add sftp_dst sftp://sftp-prod:/srv/restore Use key‑based, passwordless SSH # Unattended jobs must not prompt for passwords.\n$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_plakar -C plakar@backup $ ssh-copy-id -i ~/.ssh/id_ed25519_plakar.pub sftpuser@host.example.com $ sftp -i ~/.ssh/id_ed25519_plakar sftpuser@host.example.com If your private key is encrypted, run an agent:\n$ eval \u0026#34;$(ssh-agent -s)\u0026#34; $ ssh-add ~/.ssh/id_ed25519_plakar Host keys and trust # For production, keep strict host key checking enabled and manage ~/.ssh/known_hosts normally. Avoid disabling host key checks except in isolated test environments.\nLimitations and scope # What is captured during backup\nFiles and directories reachable under the specified SFTP path File metadata (timestamps, permissions, sizes) What is not captured\nSystem configuration outside the backed‑up path (e.g., SSHD config, firewall rules) OS user and group database, running processes, or service state SSH server settings and known_hosts Snapshot consistency\nChanges during backup (creates, updates, deletes) may result in a snapshot that reflects different points in time for different files. 
For highly dynamic paths, consider quiescing the workload or backing up from a read‑only replica.\nTroubleshooting # Authentication or permission errors\nValidate the SSH key, username, and target path permissions. Ensure the SFTP subsystem is enabled on the server. Host key verification failed\nConnect once interactively to record the host key in ~/.ssh/known_hosts. Only use insecure_ignore_host_key=true-style options in disposable test environments. Chroot or path issues\nIf the server uses chrooted SFTP, verify the effective path inside the chroot matches your URL. Passphrase prompts\nUse ssh-agent to cache the key, or deploy a dedicated non‑encrypted key restricted to the backup account. FAQ # How do I set username, port, or identity file?\nPrefer SSH config (~/.ssh/config) with a host alias.\nCan I move snapshots between two SFTP‑hosted stores?\nYes. Define two stores, then use plakar at \u0026quot;@store1\u0026quot; sync to \u0026quot;@store2\u0026quot; to synchronize them.\nSee also # Plakar Architecture (Kloset Engine) OpenSSH / SFTP Documentation ","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/integrations/sftp/","section":"Docs","summary":"Back up and restore remote directories over SFTP/SSH, and host Kloset stores on remote SFTP servers.","title":"SFTP / SSH","type":"docs"},{"content":" Should you push or pull backups # When designing a backup strategy, one of the first decisions to make is whether backups should be pushed from the systems being backed up or pulled from a central backup server.\nBoth models are widely used. Plakar supports both approaches and lets you choose where backup operations are initiated.\nWhat does “push” mean # In a push model, each system initiates its own backups.\nEach server runs Plakar locally and sends its data to a remote Kloset Store. 
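As a sketch of the two models (host names and paths are illustrative, and the pull form assumes the SFTP source connector is installed), the same backup command can run from either side:

```shell
# Push model: run on the application server itself, writing to a
# Kloset store hosted on a remote SFTP server (names are examples).
plakar at sftp://backup-host/var/backups backup /srv/data

# Pull model: run on the backup server, reading the remote directory
# through the SFTP source connector into a local Kloset store.
plakar at /var/backups backup sftp://app-host:/srv/data
```
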
Backups are triggered from the source systems themselves.\nConceptually:\nThe data source controls when backups happen Each system needs access to the backup destination Backup logic is distributed across machines This model is often used when servers are autonomous or managed independently.\nWhat does “pull” mean # In a pull model, a central system initiates backups.\nA backup server connects to other machines, retrieves their data, and stores it locally in a Kloset Store. The source systems do not actively participate in the backup process.\nConceptually:\nBackups are controlled from a single place Source systems expose data but do not run backup jobs Backup logic is centralized This model is common in environments with many servers or strict access controls.\nHow Plakar differs # Plakar does not enforce one model over the other.\nPlakar treats both local paths and remote locations as sources, so the same backup mechanism can be used in either direction. The difference is simply where the backup command is executed.\nThis flexibility allows you to adapt your backup strategy to your operational and security requirements.\nChoosing between push and pull # There is no universally correct choice. 
Each model has advantages and trade‑offs.\nPush backups are often preferred when: # Servers are self‑managed or isolated Outbound access to a backup destination is allowed Backup schedules are owned by individual systems Simplicity is more important than central control Pull backups are often preferred when: # You want centralized control and visibility You manage a large number of servers Backup credentials and passphrases should exist in one place You want to minimize software running on production systems Hybrid approach # Some environments use a combination of both models.\nFor example:\nCritical systems push backups immediately (more frequently) Less critical systems are backed up periodically via pull Remote or restricted environments use pull through controlled access points Plakar supports any of these approaches without requiring different tools or workflows.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/explanations/should-you-pull-or-push-backups/","section":"Docs","summary":"Understand the difference between push and pull backup models, and how Plakar supports both.","title":"Should you push or pull backups","type":"docs"},{"content":" Should you push or pull backups # When designing a backup strategy, one of the first decisions to make is whether backups should be pushed from the systems being backed up or pulled from a central backup server.\nBoth models are widely used. Plakar supports both approaches and lets you choose where backup operations are initiated.\nWhat does “push” mean # In a push model, each system initiates its own backups.\nEach server runs Plakar locally and sends its data to a remote Kloset Store. 
Backups are triggered from the source systems themselves.\nConceptually:\nThe data source controls when backups happen Each system needs access to the backup destination Backup logic is distributed across machines This model is often used when servers are autonomous or managed independently.\nWhat does “pull” mean # In a pull model, a central system initiates backups.\nA backup server connects to other machines, retrieves their data, and stores it locally in a Kloset Store. The source systems do not actively participate in the backup process.\nConceptually:\nBackups are controlled from a single place Source systems expose data but do not run backup jobs Backup logic is centralized This model is common in environments with many servers or strict access controls.\nHow Plakar differs # Plakar does not enforce one model over the other.\nPlakar treats both local paths and remote locations as sources, so the same backup mechanism can be used in either direction. The difference is simply where the backup command is executed.\nThis flexibility allows you to adapt your backup strategy to your operational and security requirements.\nChoosing between push and pull # There is no universally correct choice. 
Each model has advantages and trade‑offs.\nPush backups are often preferred when: # Servers are self‑managed or isolated Outbound access to a backup destination is allowed Backup schedules are owned by individual systems Simplicity is more important than central control Pull backups are often preferred when: # You want centralized control and visibility You manage a large number of servers Backup credentials and passphrases should exist in one place You want to minimize software running on production systems Hybrid approach # Some environments use a combination of both models.\nFor example:\nCritical systems push backups immediately (more frequently) Less critical systems are backed up periodically via pull Remote or restricted environments use pull through controlled access points Plakar supports any of these approaches without requiring different tools or workflows.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/explanations/should-you-pull-or-push-backups/","section":"Docs","summary":"Understand the difference between push and pull backup models, and how Plakar supports both.","title":"Should you push or pull backups","type":"docs"},{"content":" Should you push or pull backups # When designing a backup strategy, one of the first decisions to make is whether backups should be pushed from the systems being backed up or pulled from a central backup server.\nBoth models are widely used. Plakar supports both approaches and lets you choose where backup operations are initiated.\nWhat does “push” mean # In a push model, each system initiates its own backups.\nEach server runs Plakar locally and sends its data to a remote Kloset Store. 
Backups are triggered from the source systems themselves.\nConceptually:\nThe data source controls when backups happen Each system needs access to the backup destination Backup logic is distributed across machines This model is often used when servers are autonomous or managed independently.\nWhat does “pull” mean # In a pull model, a central system initiates backups.\nA backup server connects to other machines, retrieves their data, and stores it locally in a Kloset Store. The source systems do not actively participate in the backup process.\nConceptually:\nBackups are controlled from a single place Source systems expose data but do not run backup jobs Backup logic is centralized This model is common in environments with many servers or strict access controls.\nHow Plakar differs # Plakar does not enforce one model over the other.\nPlakar treats both local paths and remote locations as sources, so the same backup mechanism can be used in either direction. The difference is simply where the backup command is executed.\nThis flexibility allows you to adapt your backup strategy to your operational and security requirements.\nChoosing between push and pull # There is no universally correct choice. 
Each model has advantages and trade‑offs.\nPush backups are often preferred when: # Servers are self‑managed or isolated Outbound access to a backup destination is allowed Backup schedules are owned by individual systems Simplicity is more important than central control Pull backups are often preferred when: # You want centralized control and visibility You manage a large number of servers Backup credentials and passphrases should exist in one place You want to minimize software running on production systems Hybrid approach # Some environments use a combination of both models.\nFor example:\nCritical systems push backups immediately (more frequently) Less critical systems are backed up periodically via pull Remote or restricted environments use pull through controlled access points Plakar supports any of these approaches without requiring different tools or workflows.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/explanations/should-you-pull-or-push-backups/","section":"Docs","summary":"Understand the difference between push and pull backup models, and how Plakar supports both.","title":"Should you push or pull backups","type":"docs"},{"content":" Should you push or pull backups # When designing a backup strategy, one of the first decisions to make is whether backups should be pushed from the systems being backed up or pulled from a central backup server.\nBoth models are widely used. Plakar supports both approaches and lets you choose where backup operations are initiated.\nWhat does “push” mean # In a push model, each system initiates its own backups.\nEach server runs Plakar locally and sends its data to a remote Kloset Store. 
Backups are triggered from the source systems themselves.\nConceptually:\nThe data source controls when backups happen Each system needs access to the backup destination Backup logic is distributed across machines This model is often used when servers are autonomous or managed independently.\nWhat does “pull” mean # In a pull model, a central system initiates backups.\nA backup server connects to other machines, retrieves their data, and stores it locally in a Kloset Store. The source systems do not actively participate in the backup process.\nConceptually:\nBackups are controlled from a single place Source systems expose data but do not run backup jobs Backup logic is centralized This model is common in environments with many servers or strict access controls.\nHow Plakar differs # Plakar does not enforce one model over the other.\nPlakar treats both local paths and remote locations as sources, so the same backup mechanism can be used in either direction. The difference is simply where the backup command is executed.\nThis flexibility allows you to adapt your backup strategy to your operational and security requirements.\nChoosing between push and pull # There is no universally correct choice. 
Each model has advantages and trade‑offs.\nPush backups are often preferred when: # Servers are self‑managed or isolated Outbound access to a backup destination is allowed Backup schedules are owned by individual systems Simplicity is more important than central control Pull backups are often preferred when: # You want centralized control and visibility You manage a large number of servers Backup credentials and passphrases should exist in one place You want to minimize software running on production systems Hybrid approach # Some environments use a combination of both models.\nFor example:\nCritical systems push backups immediately (more frequently) Less critical systems are backed up periodically via pull Remote or restricted environments use pull through controlled access points Plakar supports any of these approaches without requiring different tools or workflows.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/explanations/should-you-pull-or-push-backups/","section":"Docs","summary":"Understand the difference between push and pull backup models, and how Plakar supports both.","title":"Should you push or pull backups","type":"docs"},{"content":" Guides # This page gathers a collection of practical guides to help you use Plakar effectively. 
Each guide focuses on a specific topic, from basic setup to advanced configurations, so you can quickly find the instructions you need.\nScheduling Tasks Learn how to configure and run the Plakar scheduler to automate backups.\nImporting Configurations Learn how to import configurations for stores, sources, and destinations in Plakar using the import command.\nCreating a Kloset Store Create a Kloset Store on the filesystem using Plakar.\nServing a Kloset Store over HTTP Expose a Kloset Store over HTTP using the plakar server command.\nExcluding files from a backup Learn how to exclude files from a backup in Plakar.\nRetrieving secrets via external command The passphrase for accessing an encrypted Kloset Store can be stored in the environment, a file, or in the configuration. It can also be retrieved via an external command, for example, your password manager.\nLogging In to Plakar Log in to unlock optional features like pre-built package installation and alerting.\nManaging packages How to install, upgrade, and remove Plakar integration packages.\nPruning snapshots Remove old snapshots from a Kloset store using age, tags, or retention policies.\n","date":"18 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/guides/","section":"Docs","summary":"","title":"Guides","type":"docs"},{"content":" Physical backups # Physical backups copy raw database directories and files directly from the MySQL data directory. 
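To confirm where that data directory actually lives on a given server, it can be queried directly (connection options below are illustrative):

```shell
# Print the active MySQL data directory (commonly /var/lib/mysql).
mysql -u root -p -e 'SELECT @@datadir;'
```
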
This approach is faster than logical backups and produces more compact output, but requires MySQL to be stopped or locked during backup.\nFor a deeper understanding of physical backups and backup methods, refer to the official MySQL documentation on backup methods.\nPrerequisites # Running MySQL server with accessible data directory Root or mysql user privileges MySQL server stopped or ability to apply read locks Data Consistency Copying the data directory while MySQL is running without proper locking produces inconsistent backups.\nBack Up with MySQL Stopped # Stop MySQL, back up the data directory, then restart:\n$ sudo systemctl stop mysql.service $ sudo plakar at /var/backups backup /var/lib/mysql $ sudo systemctl start mysql.service Data Directory Location Check datadir in /etc/mysql/my.cnf or /etc/my.cnf if your data directory differs.\nBack Up with Read Lock # Minimize downtime using FLUSH TABLES WITH READ LOCK:\n$ mysql -u root -p \u0026lt;\u0026lt; EOF FLUSH TABLES WITH READ LOCK; SYSTEM sudo plakar at /var/backups backup /var/lib/mysql UNLOCK TABLES; EOF Write Operations Blocked All write operations are blocked during backup. 
The lock is released automatically if the connection drops.\nBack Up Specific Databases # Back up individual database directories:\n$ sudo systemctl stop mysql.service $ sudo plakar at /var/backups backup /var/lib/mysql/\u0026lt;dbname\u0026gt; $ sudo systemctl start mysql.service Replace \u0026lt;dbname\u0026gt; with the target database name.\nRestore Physical Backup # Before Restoring Stop MySQL and back up or move the current data directory.\nList snapshots:\n$ plakar at /var/backups ls Restore:\n$ sudo systemctl stop mysql.service $ sudo mv /var/lib/mysql /var/lib/mysql.old $ sudo plakar at /var/backups restore -to /var/lib/mysql \u0026lt;SNAPSHOT_ID\u0026gt; $ sudo chown -R mysql:mysql /var/lib/mysql $ sudo systemctl start mysql.service Restore Specific Databases # Restore individual database directories:\n$ sudo systemctl stop mysql.service $ sudo rm -rf /var/lib/mysql/\u0026lt;dbname\u0026gt; $ sudo plakar at /var/backups restore -to /var/lib/mysql/\u0026lt;dbname\u0026gt; \u0026lt;SNAPSHOT_ID\u0026gt; $ sudo chown -R mysql:mysql /var/lib/mysql/\u0026lt;dbname\u0026gt; $ sudo systemctl start mysql.service Run MySQL in Docker from Backup # Restore the backup and run MySQL in Docker (requires a matching MySQL version):\n$ plakar at /var/backups restore -to ./mydb \u0026lt;SNAPSHOT_ID\u0026gt; $ sudo chown -R 999:999 ./mydb $ docker run --rm -ti --name mysql \\ -v ./mydb:/var/lib/mysql \\ mysql:8.0 Connect:\n$ docker exec -ti mysql mysql -u root -p -e \u0026#39;SHOW DATABASES;\u0026#39; Considerations # Physical vs Logical Backups # Logical backups (mysqldump): Machine-independent, portable across MySQL versions and architectures Physical backups: Faster backup/restore, more compact, but require identical hardware and MySQL version InnoDB Consistency # Ensure backups include all InnoDB files:\nibdata* ib_logfile* (or #ib_redo* in MySQL 8.0.30+) Individual .ibd files InnoDB performs automatic crash recovery on startup if the backup was consistent.\nMEMORY Tables # MEMORY tables are 
not stored on disk and will be empty after physical backup restoration. Use mysqldump for MEMORY tables.\nReferences # MySQL Backup and Recovery MySQL Backup Methods MySQL FLUSH Statement MySQL InnoDB Storage Engine ","date":"18 March 2026","externalUrl":null,"permalink":"/docs/main/guides/mysql/physical-backups/","section":"Docs","summary":"Perform physical backups of MySQL databases using file copy or Percona XtraBackup with Plakar.","title":"Physical backups","type":"docs"},{"content":" Physical backups # Physical backups copy raw database directories and files directly from the MySQL data directory. This approach is faster than logical backups and produces more compact output, but requires MySQL to be stopped or locked during backup.\nFor a deeper understanding of physical backups and backup methods, refer to the official MySQL documentation on backup methods.\nPrerequisites # Running MySQL server with accessible data directory Root or mysql user privileges MySQL server stopped or ability to apply read locks Data Consistency Copying the data directory while MySQL is running without proper locking produces inconsistent backups.\nBack Up with MySQL Stopped # Stop MySQL, back up the data directory, then restart:\n$ sudo systemctl stop mysql.service $ sudo plakar at /var/backups backup /var/lib/mysql $ sudo systemctl start mysql.service Data Directory Location Check datadir in /etc/mysql/my.cnf or /etc/my.cnf if your data directory differs.\nBack Up with Read Lock # Minimize downtime using FLUSH TABLES WITH READ LOCK:\n$ mysql -u root -p \u0026lt;\u0026lt; EOF FLUSH TABLES WITH READ LOCK; SYSTEM sudo plakar at /var/backups backup /var/lib/mysql UNLOCK TABLES; EOF Write Operations Blocked All write operations are blocked during backup. 
The lock is released automatically if the connection drops.\nBack Up Specific Databases # Back up individual database directories:\n$ sudo systemctl stop mysql.service $ sudo plakar at /var/backups backup /var/lib/mysql/\u0026lt;dbname\u0026gt; $ sudo systemctl start mysql.service Replace \u0026lt;dbname\u0026gt; with the target database name.\nRestore Physical Backup # Before Restoring Stop MySQL and back up or move the current data directory.\nList snapshots:\n$ plakar at /var/backups ls Restore:\n$ sudo systemctl stop mysql.service $ sudo mv /var/lib/mysql /var/lib/mysql.old $ sudo plakar at /var/backups restore -to /var/lib/mysql \u0026lt;SNAPSHOT_ID\u0026gt; $ sudo chown -R mysql:mysql /var/lib/mysql $ sudo systemctl start mysql.service Restore Specific Databases # Restore individual database directories:\n$ sudo systemctl stop mysql.service $ sudo rm -rf /var/lib/mysql/\u0026lt;dbname\u0026gt; $ sudo plakar at /var/backups restore -to /var/lib/mysql/\u0026lt;dbname\u0026gt; \u0026lt;SNAPSHOT_ID\u0026gt; $ sudo chown -R mysql:mysql /var/lib/mysql/\u0026lt;dbname\u0026gt; $ sudo systemctl start mysql.service Run MySQL in Docker from Backup # Restore the backup and run MySQL in Docker (requires a matching MySQL version):\n$ plakar at /var/backups restore -to ./mydb \u0026lt;SNAPSHOT_ID\u0026gt; $ sudo chown -R 999:999 ./mydb $ docker run --rm -ti --name mysql \\ -v ./mydb:/var/lib/mysql \\ mysql:8.0 Connect:\n$ docker exec -ti mysql mysql -u root -p -e \u0026#39;SHOW DATABASES;\u0026#39; Considerations # Physical vs Logical Backups # Logical backups (mysqldump): Machine-independent, portable across MySQL versions and architectures Physical backups: Faster backup/restore, more compact, but require identical hardware and MySQL version InnoDB Consistency # Ensure backups include all InnoDB files:\nibdata* ib_logfile* (or #ib_redo* in MySQL 8.0.30+) Individual .ibd files InnoDB performs automatic crash recovery on startup if the backup was consistent.\nMEMORY Tables # MEMORY tables are 
not stored on disk and will be empty after physical backup restoration. Use mysqldump for MEMORY tables.\nReferences # MySQL Backup and Recovery MySQL Backup Methods MySQL FLUSH Statement MySQL InnoDB Storage Engine ","date":"18 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/mysql/physical-backups/","section":"Docs","summary":"Perform physical backups of MySQL databases using file copy or Percona XtraBackup with Plakar.","title":"Physical backups","type":"docs"},{"content":" Physical backups # Physical backups copy raw database directories and files directly from the MySQL data directory. This approach is faster than logical backups and produces more compact output, but requires MySQL to be stopped or locked during backup.\nFor a deeper understanding of physical backups and backup methods, refer to the official MySQL documentation on backup methods.\nPrerequisites # Running MySQL server with accessible data directory Root or mysql user privileges MySQL server stopped or ability to apply read locks Data Consistency Copying the data directory while MySQL is running without proper locking produces inconsistent backups.\nBack Up with MySQL Stopped # Stop MySQL, back up the data directory, then restart:\n$ sudo systemctl stop mysql.service $ sudo plakar at /var/backups backup /var/lib/mysql $ sudo systemctl start mysql.service Data Directory Location Check datadir in /etc/mysql/my.cnf or /etc/my.cnf if your data directory differs.\nBack Up with Read Lock # Minimize downtime using FLUSH TABLES WITH READ LOCK:\n$ mysql -u root -p \u0026lt;\u0026lt; EOF FLUSH TABLES WITH READ LOCK; SYSTEM sudo plakar at /var/backups backup /var/lib/mysql UNLOCK TABLES; EOF Write Operations Blocked All write operations are blocked during backup. 
The lock is released automatically if the connection drops.\nBack Up Specific Databases # Back up individual database directories:\n$ sudo systemctl stop mysql.service $ sudo plakar at /var/backups backup /var/lib/mysql/\u0026lt;dbname\u0026gt; $ sudo systemctl start mysql.service Replace \u0026lt;dbname\u0026gt; with the target database name.\nRestore Physical Backup # Before Restoring Stop MySQL and back up or move the current data directory.\nList snapshots:\n$ plakar at /var/backups ls Restore:\n$ sudo systemctl stop mysql.service $ sudo mv /var/lib/mysql /var/lib/mysql.old $ sudo plakar at /var/backups restore -to /var/lib/mysql \u0026lt;SNAPSHOT_ID\u0026gt; $ sudo chown -R mysql:mysql /var/lib/mysql $ sudo systemctl start mysql.service Restore Specific Databases # Restore individual database directories:\n$ sudo systemctl stop mysql.service $ sudo rm -rf /var/lib/mysql/\u0026lt;dbname\u0026gt; $ sudo plakar at /var/backups restore -to /var/lib/mysql/\u0026lt;dbname\u0026gt; \u0026lt;SNAPSHOT_ID\u0026gt; $ sudo chown -R mysql:mysql /var/lib/mysql/\u0026lt;dbname\u0026gt; $ sudo systemctl start mysql.service Run MySQL in Docker from Backup # Restore the backup and run MySQL in Docker (requires a matching MySQL version):\n$ plakar at /var/backups restore -to ./mydb \u0026lt;SNAPSHOT_ID\u0026gt; $ sudo chown -R 999:999 ./mydb $ docker run --rm -ti --name mysql \\ -v ./mydb:/var/lib/mysql \\ mysql:8.0 Connect:\n$ docker exec -ti mysql mysql -u root -p -e \u0026#39;SHOW DATABASES;\u0026#39; Considerations # Physical vs Logical Backups # Logical backups (mysqldump): Machine-independent, portable across MySQL versions and architectures Physical backups: Faster backup/restore, more compact, but require identical hardware and MySQL version InnoDB Consistency # Ensure backups include all InnoDB files:\nibdata* ib_logfile* (or #ib_redo* in MySQL 8.0.30+) Individual .ibd files InnoDB performs automatic crash recovery on startup if the backup was consistent.\nMEMORY Tables # MEMORY tables are 
not stored on disk and will be empty after physical backup restoration. Use mysqldump for MEMORY tables.\nReferences # MySQL Backup and Recovery MySQL Backup Methods MySQL FLUSH Statement MySQL InnoDB Storage Engine ","date":"18 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/mysql/physical-backups/","section":"Docs","summary":"Perform physical backups of MySQL databases using file copy or Percona XtraBackup with Plakar.","title":"Physical backups","type":"docs"},{"content":" Guides # This page gathers a collection of practical guides to help you use Plakar effectively. Each guide focuses on a specific topic, from basic setup to advanced configurations, so you can quickly find the instructions you need.\nScheduling Tasks Learn how to configure and run the Plakar scheduler to automate backups.\nImporting Configurations Learn how to import configurations for stores, sources, and destinations in Plakar using the import command.\nCreating a Kloset Store Create a Kloset Store on the filesystem using Plakar.\nServing a Kloset Store over HTTP Expose a Kloset Store over HTTP using the plakar server command.\nExcluding files from a backup Learn how to exclude files from a backup in Plakar\nRetrieving secrets via external command The passphrase for accessing an encrypted Kloset Store can be stored in the environment, a file, or in the configuration. 
It can also be retrieved via an external command, for example, your password manager.\nCreating a custom connector Step-by-step guide to implement and install your own Plakar connector (importer) in Go.\nLogging In to Plakar Log in to unlock optional features like pre-built package installation and alerting.\nManaging packages How to install, upgrade, and remove Plakar integration packages.\nPruning snapshots Remove old snapshots from a Kloset store using age, tags, or retention policies.\nMySQL Guides on backing up and restoring MySQL databases\nPostgreSQL Guides on backing up and restoring PostgreSQL databases\nOVHcloud Guides on running backups in OVHcloud\nExoscale Guides on running backups in Exoscale\n","date":"16 March 2026","externalUrl":null,"permalink":"/docs/main/guides/","section":"Docs","summary":"","title":"Guides","type":"docs"},{"content":" Guides # This page gathers a collection of practical guides to help you use Plakar effectively. Each guide focuses on a specific topic, from basic setup to advanced configurations, so you can quickly find the instructions you need.\nScheduling Tasks Learn how to configure and run the Plakar scheduler to automate backups.\nImporting Configurations Learn how to import configurations for stores, sources, and destinations in Plakar using the import command.\nCreating a Kloset Store Create a Kloset Store on the filesystem using Plakar.\nServing a Kloset Store over HTTP Expose a Kloset Store over HTTP using the plakar server command.\nExcluding files from a backup Learn how to exclude files from a backup in Plakar\nRetrieving secrets via external command The passphrase for accessing an encrypted Kloset Store can be stored in the environment, a file, or in the configuration. 
It can also be retrieved via an external command, for example, your password manager.\nLogging In to Plakar Log in to unlock optional features like pre-built package installation and alerting.\nManaging packages How to install, upgrade, and remove Plakar integration packages.\nPruning snapshots Remove old snapshots from a Kloset store using age, tags, or retention policies.\nMySQL Guides on backing up and restoring MySQL databases\nPostgreSQL Guides on backing up and restoring PostgreSQL databases\nOVHcloud Guides on running backups in OVHcloud\nExoscale Guides on running backups in Exoscale\n","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/","section":"Docs","summary":"","title":"Guides","type":"docs"},{"content":" Guides # This page gathers a collection of practical guides to help you use Plakar effectively. Each guide focuses on a specific topic, from basic setup to advanced configurations, so you can quickly find the instructions you need.\nScheduling Tasks Learn how to configure and run the Plakar scheduler to automate backups.\nImporting Configurations Learn how to import configurations for stores, sources, and destinations in Plakar using the import command.\nCreating a Kloset Store Create a Kloset Store on the filesystem using Plakar.\nServing a Kloset Store over HTTP Expose a Kloset Store over HTTP using the plakar server command.\nExcluding files from a backup Learn how to exclude files from a backup in Plakar\nRetrieving secrets via external command The passphrase for accessing an encrypted Kloset Store can be stored in the environment, a file, or in the configuration. 
It can also be retrieved via an external command, for example, your password manager.\nCreating a custom connector Step-by-step guide to implement and install your own Plakar connector (importer) in Go.\nLogging In to Plakar Log in to unlock optional features like pre-built package installation and alerting.\nManaging packages How to install, upgrade, and remove Plakar integration packages.\nPruning snapshots Remove old snapshots from a Kloset store using age, tags, or retention policies.\nMySQL Guides on backing up and restoring MySQL databases\nPostgreSQL Guides on backing up and restoring PostgreSQL databases\nOVHcloud Guides on running backups in OVHcloud\nExoscale Guides on running backups in Exoscale\n","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/","section":"Docs","summary":"","title":"Guides","type":"docs"},{"content":" Importing Configurations # The commands plakar store, plakar source and plakar destination configure storage locations, backup sources, and restore destinations respectively.\nEach command includes an import subcommand for importing configurations from different sources.\nWhy you\u0026rsquo;d need to import configurations # Plakar stores, sources, and destinations each require configuration data such as credentials, locations and passphrases that you\u0026rsquo;d otherwise have to re-enter manually on every machine.\nImporting lets you replicate a working setup across servers, share configurations across a team, migrate from one machine to another, or bootstrap a new installation from a backup of your config.\nIt also lets you bring in configurations from other tools directly: if you already have rclone remotes configured, you can import them as Plakar stores without duplicating the credentials by hand.\nBasic Usage # The import subcommand can read configuration data from:\nStandard input (stdin) — useful for piping from other commands A file specified with the -config option URLs (when using -config with a URL) 
Importing from a Configuration File # Use the -config option to specify a configuration file to import. Create a YAML file with the appropriate structure for the type of configuration you\u0026rsquo;re importing.\nFor example, to import store configurations, create a file like my-stores.yaml:\nminio: access_key: minioadmin location: s3://localhost:9000/kloset passphrase: superpassphrase secret_access_key: minioadmin use_tls: \u0026#34;false\u0026#34; Then import it with:\n$ plakar store import -config my-stores.yaml Similarly for sources and destinations:\n$ plakar source import -config my-sources.yaml $ plakar destination import -config my-destinations.yaml The configuration files should be in YAML format with named sections for each configuration entry.\nImporting from Piped Input # You can pipe configuration data directly from other commands:\n# Import a specific source configuration as a destination $ plakar source show -secrets | plakar destination import mybucket # Import all sources as destinations $ plakar source show -secrets | plakar destination import # Import from rclone configuration $ rclone config show | plakar store import -rclone koofr Section Selection # You can specify which sections to import by listing their names. Sections can be renamed during import by appending :newname.\n# Import only specific sections $ plakar store import -config stores.yaml section1 section2 # Import and rename sections $ plakar store import -config stores.yaml oldname:newname Configuration File Format # Configuration files should be in YAML format. 
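As an aside, the selection and renaming behaviour described under Section Selection can be sketched over such a file\u0026rsquo;s named sections. The following is a hypothetical illustration in Python, not plakar\u0026rsquo;s actual implementation; it only shows how name and oldname:newname selectors resolve against the loaded sections:

```python
def select_sections(config, selectors):
    # config: mapping of section name -> settings (as loaded from YAML).
    # selectors: e.g. ["section1", "oldname:newname"]; empty means "all".
    # (Illustrative sketch only, not plakar source code.)
    if not selectors:
        return dict(config)
    out = {}
    for sel in selectors:
        old, sep, new = sel.partition(":")
        out[new if sep else old] = config[old]
    return out

stores = {"minio": {"location": "s3://localhost:9000/kloset"},
          "localbackup": {"location": "/var/backups"}}
print(select_sections(stores, ["minio:primary"]))
# → {'primary': {'location': 's3://localhost:9000/kloset'}}
```

With no selectors every section is taken, mirroring the piped-import examples above that import all entries at once.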
Each top-level key represents a named configuration section.\nStore Configuration Example # mystorage: location: s3://mybucket access_key: myaccesskey secret_access_key: mysecretkey localbackup: location: /var/backups Source Configuration Example # myapp: location: /var/www/myapp excludes: \u0026#34;*.log,*.tmp\u0026#34; database: location: postgresql://user:pass@localhost/mydb Destination Configuration Example # restorepoint: location: /mnt/restore permissions: 0755 cloudrestore: location: s3://restore-bucket access_key: restorekey secret_access_key: restoresecret Practical Examples # Migrating from rclone # If you have rclone configurations, you can easily import them as Plakar stores:\n# Show available rclone remotes $ rclone config show # Import a specific rclone remote as a Plakar store $ rclone config show | plakar store import -rclone myremote Bulk Configuration Management # You can export and import configurations between different Plakar installations:\n# Export current store configurations $ plakar store show -secrets \u0026gt; stores-backup.yaml # Import on another machine $ plakar store import -config stores-backup.yaml Converting Sources to Destinations # A common use case is to use the same locations for both backup sources and restore destinations:\n# Import all sources as destinations $ plakar source show -secrets | plakar destination import # Import a specific source as a destination with a new name $ plakar source show -secrets | plakar destination import mysource:myrestore Verification # After importing, verify the configuration was imported correctly:\n$ plakar store show $ plakar source show $ plakar destination show Use the check subcommand to validate configurations:\n$ plakar store check mystore $ plakar source check mysource $ plakar destination check mydest Troubleshooting # Common Issues # Permission Denied: Ensure you have read access to the configuration file and write access to Plakar\u0026rsquo;s configuration directory.\nInvalid 
YAML: Validate your YAML syntax before importing. Use tools like yamllint or online validators.\nName Conflicts: Use -overwrite to replace existing configurations, or rename sections during import.\nrclone Import Issues: Ensure rclone is installed and the specified remote exists in your rclone configuration.\n","date":"16 March 2026","externalUrl":null,"permalink":"/docs/main/guides/importing-configurations/","section":"Docs","summary":"Learn how to import configurations for stores, sources, and destinations in Plakar using the import command.","title":"Importing Configurations","type":"docs"},{"content":" Importing Configurations # The commands plakar store, plakar source and plakar destination configure storage locations, backup sources, and restore destinations respectively.\nEach command includes an import subcommand for importing configurations from different sources.\nWhy you\u0026rsquo;d need to import configurations # Plakar stores, sources, and destinations each require configuration data such as credentials, locations and passphrases that you\u0026rsquo;d otherwise have to re-enter manually on every machine.\nImporting lets you replicate a working setup across servers, share configurations across a team, migrate from one machine to another, or bootstrap a new installation from a backup of your config.\nIt also lets you bring in configurations from other tools directly: if you already have rclone remotes configured, you can import them as Plakar stores without duplicating the credentials by hand.\nBasic Usage # The import subcommand can read configuration data from:\nStandard input (stdin) — useful for piping from other commands A file specified with the -config option URLs (when using -config with a URL) Importing from a Configuration File # Use the -config option to specify a configuration file to import. 
Create a YAML file with the appropriate structure for the type of configuration you\u0026rsquo;re importing.\nFor example, to import store configurations, create a file like my-stores.yaml:\nminio: access_key: minioadmin location: s3://localhost:9000/kloset passphrase: superpassphrase secret_access_key: minioadmin use_tls: \u0026#34;false\u0026#34; Then import it with:\n$ plakar store import -config my-stores.yaml Similarly for sources and destinations:\n$ plakar source import -config my-sources.yaml $ plakar destination import -config my-destinations.yaml The configuration files should be in YAML format with named sections for each configuration entry.\nImporting from Piped Input # You can pipe configuration data directly from other commands:\n# Import a specific source configuration as a destination $ plakar source show -secrets | plakar destination import mybucket # Import all sources as destinations $ plakar source show -secrets | plakar destination import # Import from rclone configuration $ rclone config show | plakar store import -rclone koofr Section Selection # You can specify which sections to import by listing their names. Sections can be renamed during import by appending :newname.\n# Import only specific sections $ plakar store import -config stores.yaml section1 section2 # Import and rename sections $ plakar store import -config stores.yaml oldname:newname Configuration File Format # Configuration files should be in YAML format. 
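As an aside, the selection and renaming behaviour described under Section Selection can be sketched over such a file\u0026rsquo;s named sections. The following is a hypothetical illustration in Python, not plakar\u0026rsquo;s actual implementation; it only shows how name and oldname:newname selectors resolve against the loaded sections:

```python
def select_sections(config, selectors):
    # config: mapping of section name -> settings (as loaded from YAML).
    # selectors: e.g. ["section1", "oldname:newname"]; empty means "all".
    # (Illustrative sketch only, not plakar source code.)
    if not selectors:
        return dict(config)
    out = {}
    for sel in selectors:
        old, sep, new = sel.partition(":")
        out[new if sep else old] = config[old]
    return out

stores = {"minio": {"location": "s3://localhost:9000/kloset"},
          "localbackup": {"location": "/var/backups"}}
print(select_sections(stores, ["minio:primary"]))
# → {'primary': {'location': 's3://localhost:9000/kloset'}}
```

With no selectors every section is taken, mirroring the piped-import examples above that import all entries at once.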
Each top-level key represents a named configuration section.\nStore Configuration Example # mystorage: location: s3://mybucket access_key: myaccesskey secret_access_key: mysecretkey localbackup: location: /var/backups Source Configuration Example # myapp: location: /var/www/myapp excludes: \u0026#34;*.log,*.tmp\u0026#34; database: location: postgresql://user:pass@localhost/mydb Destination Configuration Example # restorepoint: location: /mnt/restore permissions: 0755 cloudrestore: location: s3://restore-bucket access_key: restorekey secret_access_key: restoresecret Practical Examples # Migrating from rclone # If you have rclone configurations, you can easily import them as Plakar stores:\n# Show available rclone remotes $ rclone config show # Import a specific rclone remote as a Plakar store $ rclone config show | plakar store import -rclone myremote Bulk Configuration Management # You can export and import configurations between different Plakar installations:\n# Export current store configurations $ plakar store show -secrets \u0026gt; stores-backup.yaml # Import on another machine $ plakar store import -config stores-backup.yaml Converting Sources to Destinations # A common use case is to use the same locations for both backup sources and restore destinations:\n# Import all sources as destinations $ plakar source show -secrets | plakar destination import # Import a specific source as a destination with a new name $ plakar source show -secrets | plakar destination import mysource:myrestore Verification # After importing, verify the configuration was imported correctly:\n$ plakar store show $ plakar source show $ plakar destination show Use the check subcommand to validate configurations:\n$ plakar store check mystore $ plakar source check mysource $ plakar destination check mydest Troubleshooting # Common Issues # Permission Denied: Ensure you have read access to the configuration file and write access to Plakar\u0026rsquo;s configuration directory.\nInvalid 
YAML: Validate your YAML syntax before importing. Use tools like yamllint or online validators.\nName Conflicts: Use -overwrite to replace existing configurations, or rename sections during import.\nrclone Import Issues: Ensure rclone is installed and the specified remote exists in your rclone configuration.\n","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/guides/importing-configurations/","section":"Docs","summary":"Learn how to import configurations for stores, sources, and destinations in Plakar using the import command.","title":"Importing Configurations","type":"docs"},{"content":" Importing Configurations # The commands plakar store, plakar source and plakar destination configure storage locations, backup sources, and restore destinations respectively.\nEach command includes an import subcommand for importing configurations from different sources.\nWhy you\u0026rsquo;d need to import configurations # Plakar stores, sources, and destinations each require configuration data such as credentials, locations and passphrases that you\u0026rsquo;d otherwise have to re-enter manually on every machine.\nImporting lets you replicate a working setup across servers, share configurations across a team, migrate from one machine to another, or bootstrap a new installation from a backup of your config.\nIt also lets you bring in configurations from other tools directly: if you already have rclone remotes configured, you can import them as Plakar stores without duplicating the credentials by hand.\nBasic Usage # The import subcommand can read configuration data from:\nStandard input (stdin) — useful for piping from other commands A file specified with the -config option URLs (when using -config with a URL) Importing from a Configuration File # Use the -config option to specify a configuration file to import. 
Create a YAML file with the appropriate structure for the type of configuration you\u0026rsquo;re importing.\nFor example, to import store configurations, create a file like my-stores.yaml:\nminio: access_key: minioadmin location: s3://localhost:9000/kloset passphrase: superpassphrase secret_access_key: minioadmin use_tls: \u0026#34;false\u0026#34; Then import it with:\n$ plakar store import -config my-stores.yaml Similarly for sources and destinations:\n$ plakar source import -config my-sources.yaml $ plakar destination import -config my-destinations.yaml The configuration files should be in YAML format with named sections for each configuration entry.\nImporting from Piped Input # You can pipe configuration data directly from other commands:\n# Import a specific source configuration as a destination $ plakar source show -secrets | plakar destination import mybucket # Import all sources as destinations $ plakar source show -secrets | plakar destination import # Import from rclone configuration $ rclone config show | plakar store import -rclone koofr Section Selection # You can specify which sections to import by listing their names. Sections can be renamed during import by appending :newname.\n# Import only specific sections $ plakar store import -config stores.yaml section1 section2 # Import and rename sections $ plakar store import -config stores.yaml oldname:newname Configuration File Format # Configuration files should be in YAML format. 
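As an aside, the selection and renaming behaviour described under Section Selection can be sketched over such a file\u0026rsquo;s named sections. The following is a hypothetical illustration in Python, not plakar\u0026rsquo;s actual implementation; it only shows how name and oldname:newname selectors resolve against the loaded sections:

```python
def select_sections(config, selectors):
    # config: mapping of section name -> settings (as loaded from YAML).
    # selectors: e.g. ["section1", "oldname:newname"]; empty means "all".
    # (Illustrative sketch only, not plakar source code.)
    if not selectors:
        return dict(config)
    out = {}
    for sel in selectors:
        old, sep, new = sel.partition(":")
        out[new if sep else old] = config[old]
    return out

stores = {"minio": {"location": "s3://localhost:9000/kloset"},
          "localbackup": {"location": "/var/backups"}}
print(select_sections(stores, ["minio:primary"]))
# → {'primary': {'location': 's3://localhost:9000/kloset'}}
```

With no selectors every section is taken, mirroring the piped-import examples above that import all entries at once.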
Each top-level key represents a named configuration section.\nStore Configuration Example # mystorage: location: s3://mybucket access_key: myaccesskey secret_access_key: mysecretkey localbackup: location: /var/backups Source Configuration Example # myapp: location: /var/www/myapp excludes: \u0026#34;*.log,*.tmp\u0026#34; database: location: postgresql://user:pass@localhost/mydb Destination Configuration Example # restorepoint: location: /mnt/restore permissions: 0755 cloudrestore: location: s3://restore-bucket access_key: restorekey secret_access_key: restoresecret Practical Examples # Migrating from rclone # If you have rclone configurations, you can easily import them as Plakar stores:\n# Show available rclone remotes $ rclone config show # Import a specific rclone remote as a Plakar store $ rclone config show | plakar store import -rclone myremote Bulk Configuration Management # You can export and import configurations between different Plakar installations:\n# Export current store configurations $ plakar store show -secrets \u0026gt; stores-backup.yaml # Import on another machine $ plakar store import -config stores-backup.yaml Converting Sources to Destinations # A common use case is to use the same locations for both backup sources and restore destinations:\n# Import all sources as destinations $ plakar source show -secrets | plakar destination import # Import a specific source as a destination with a new name $ plakar source show -secrets | plakar destination import mysource:myrestore Verification # After importing, verify the configuration was imported correctly:\n$ plakar store show $ plakar source show $ plakar destination show Use the check subcommand to validate configurations:\n$ plakar store check mystore $ plakar source check mysource $ plakar destination check mydest Troubleshooting # Common Issues # Permission Denied: Ensure you have read access to the configuration file and write access to Plakar\u0026rsquo;s configuration directory.\nInvalid 
YAML: Validate your YAML syntax before importing. Use tools like yamllint or online validators.\nName Conflicts: Use -overwrite to replace existing configurations, or rename sections during import.\nrclone Import Issues: Ensure rclone is installed and the specified remote exists in your rclone configuration.\n","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/importing-configurations/","section":"Docs","summary":"Learn how to import configurations for stores, sources, and destinations in Plakar using the import command.","title":"Importing Configurations","type":"docs"},{"content":" Importing Configurations # The commands plakar store, plakar source and plakar destination configure storage locations, backup sources, and restore destinations respectively.\nEach command includes an import subcommand for importing configurations from different sources.\nWhy you\u0026rsquo;d need to import configurations # Plakar stores, sources, and destinations each require configuration data such as credentials, locations and passphrases that you\u0026rsquo;d otherwise have to re-enter manually on every machine.\nImporting lets you replicate a working setup across servers, share configurations across a team, migrate from one machine to another, or bootstrap a new installation from a backup of your config.\nIt also lets you bring in configurations from other tools directly: if you already have rclone remotes configured, you can import them as Plakar stores without duplicating the credentials by hand.\nBasic Usage # The import subcommand can read configuration data from:\nStandard input (stdin) — useful for piping from other commands A file specified with the -config option URLs (when using -config with a URL) Importing from a Configuration File # Use the -config option to specify a configuration file to import. 
Create a YAML file with the appropriate structure for the type of configuration you\u0026rsquo;re importing.\nFor example, to import store configurations, create a file like my-stores.yaml:\nminio: access_key: minioadmin location: s3://localhost:9000/kloset passphrase: superpassphrase secret_access_key: minioadmin use_tls: \u0026#34;false\u0026#34; Then import it with:\n$ plakar store import -config my-stores.yaml Similarly for sources and destinations:\n$ plakar source import -config my-sources.yaml $ plakar destination import -config my-destinations.yaml The configuration files should be in YAML format with named sections for each configuration entry.\nImporting from Piped Input # You can pipe configuration data directly from other commands:\n# Import a specific source configuration as a destination $ plakar source show -secrets | plakar destination import mybucket # Import all sources as destinations $ plakar source show -secrets | plakar destination import # Import from rclone configuration $ rclone config show -secrets | plakar store import -rclone koofr Section Selection # You can specify which sections to import by listing their names. Sections can be renamed during import by appending :newname.\n# Import only specific sections $ plakar store import -config stores.yaml section1 section2 # Import and rename sections $ plakar store import -config stores.yaml oldname:newname Configuration File Format # Configuration files should be in YAML format. 
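The section-selection syntax above (bare names, or oldname:newname to rename on import) can be modeled in a few lines. This is a hypothetical sketch of the semantics only, not Plakar's actual implementation:

```python
# Hypothetical sketch of the "section1 section2" / "oldname:newname"
# selection semantics described above; Plakar's real code may differ.

def parse_section_args(args):
    """Map each requested section name to the name to import it under."""
    mapping = {}
    for arg in args:
        old, sep, new = arg.partition(":")
        mapping[old] = new if sep else old
    return mapping

def select_sections(config, args):
    """Keep only requested sections, renaming where ':newname' was given."""
    if not args:                      # no names given: import everything
        return dict(config)
    wanted = parse_section_args(args)
    return {wanted[name]: body for name, body in config.items() if name in wanted}

stores = {
    "minio": {"location": "s3://localhost:9000/kloset"},
    "localbackup": {"location": "/var/backups"},
}
print(select_sections(stores, ["minio:prod-minio"]))
# {'prod-minio': {'location': 's3://localhost:9000/kloset'}}
```

With no section arguments everything is imported; with arguments, only the named sections survive, under their new names where a rename was requested.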
Each top-level key represents a named configuration section.\nStore Configuration Example # mystorage: location: s3://mybucket access_key: myaccesskey secret_access_key: mysecretkey localbackup: location: /var/backups Source Configuration Example # myapp: location: /var/www/myapp excludes: \u0026#34;*.log,*.tmp\u0026#34; database: location: postgresql://user:pass@localhost/mydb Destination Configuration Example # restorepoint: location: /mnt/restore permissions: 0755 cloudrestore: location: s3://restore-bucket access_key: restorekey secret_access_key: restoresecret Practical Examples # Migrating from rclone # If you have rclone configurations, you can easily import them as Plakar stores:\n# Show available rclone remotes $ rclone config show # Import a specific rclone remote as a Plakar store $ rclone config show | plakar store import -rclone myremote Bulk Configuration Management # You can export and import configurations between different Plakar installations:\n# Export current store configurations $ plakar store show -secrets \u0026gt; stores-backup.yaml # Import on another machine $ plakar store import -config stores-backup.yaml Converting Sources to Destinations # A common use case is to use the same locations for both backup sources and restore destinations:\n# Import all sources as destinations $ plakar source show -secrets | plakar destination import # Import a specific source as a destination with a new name $ plakar source show -secrets | plakar destination import mysource:myrestore Verification # After importing, verify the configuration was imported correctly:\n$ plakar store show $ plakar source show $ plakar destination show Use the check subcommand to validate configurations:\n$ plakar store check mystore $ plakar source check mysource $ plakar destination check mydest Troubleshooting # Common Issues # Permission Denied: Ensure you have read access to the configuration file and write access to Plakar\u0026rsquo;s configuration directory.\nInvalid 
YAML: Validate your YAML syntax before importing. Use tools like yamllint or online validators.\nName Conflicts: Use -overwrite to replace existing configurations, or rename sections during import.\nrclone Import Issues: Ensure rclone is installed and the specified remote exists in your rclone configuration.\n","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/importing-configurations/","section":"Docs","summary":"Learn how to import configurations for stores, sources, and destinations in Plakar using the import command.","title":"Importing Configurations","type":"docs"},{"content":" Installation # Developer Version This is the developer version of Plakar and it can only be installed from source. Only stable versions have distributed assets that can be installed using other OS-specific methods.\nTo build Plakar from source, you will need:\nGo (Golang) make (available by default on most Linux distributions; on macOS, install the Xcode command line tools with xcode-select --install; on Windows, use WSL or a tool like GnuWin32 Make) Clone the repository and run make:\n$ git clone https://github.com/PlakarKorp/plakar.git $ cd plakar $ make This produces a plakar binary in the current directory. To build a specific release version, check out the corresponding tag before running make:\n$ git checkout v1.1.0 $ make Verifying the Installation # Verify the installation by running:\n$ plakar version This should return the expected version number, for example plakar/v1.1.0.\nDownloading Specific Versions # All release versions of Plakar are available directly from GitHub on the project\u0026rsquo;s release page.\nFor each release, check under the \u0026ldquo;Assets\u0026rdquo; section for a list of pre-built packages. 
They follow the naming convention plakar_\u0026lt;version\u0026gt;_\u0026lt;os\u0026gt;_\u0026lt;arch\u0026gt;.\u0026lt;format\u0026gt;.\nInstallation Troubleshooting # If you encounter any issues during installation, or notice that this documentation is out of date:\nEnsure you are following the instructions for the correct version of plakar. Open an issue on the GitHub issue tracker. Next Steps: Getting Started # Now that you have plakar installed, we recommend proceeding to the Quickstart guide to set up your first backup.\n","date":"11 March 2026","externalUrl":null,"permalink":"/docs/main/quickstart/installation/","section":"Docs","summary":"Install Plakar and verify your installation.","title":"Installation","type":"docs"},{"content":" Installation # Several installation methods are available depending on your operating system. Choose the method that best suits your environment.\nInstallation Methods # Debian/Ubuntu (APT) RPM-based (DNF) macOS (Homebrew) Windows Go Install Others For Debian-based operating systems (such as Ubuntu or Debian), the easiest way is to use our APT repository. 
First, install necessary dependencies and add the repository\u0026rsquo;s GPG key:\n$ sudo apt-get update $ sudo apt-get install -y curl gnupg2 $ curl -fsSL https://plakar.io/dist/keys/community-v1.0.0.gpg | sudo gpg --dearmor -o /usr/share/keyrings/plakar.gpg $ echo \u0026#34;deb [signed-by=/usr/share/keyrings/plakar.gpg] https://plakar.io/dist/repos/deb/ stable main\u0026#34; | sudo tee /etc/apt/sources.list.d/plakar.list Then update the package list and install plakar:\n$ sudo apt-get update $ sudo apt-get install plakar For operating systems which use RPM-based packages (such as Fedora), the easiest way is to use our DNF repository.\nFirst, set up the repository:\n$ cat \u0026lt;\u0026lt;EOF | sudo tee /etc/yum.repos.d/plakar.repo [plakar] name=Plakar Repository baseurl=https://plakar.io/dist/repos/rpm/$(uname -m)/ enabled=1 gpgcheck=0 gpgkey=https://plakar.io/dist/keys/community-v1.0.0.gpg EOF Then install plakar with:\n$ sudo dnf install plakar The simplest way to install Plakar on macOS is with Homebrew. Ensure you have Homebrew installed, then add the Plakar tap and install Plakar with:\n$ brew install plakarkorp/tap/plakar If you prefer not to use our tap, you can install from the default Homebrew repository instead with brew install plakar. Note that the version in the default repository may not always be the latest release.\nmacOS includes built-in protection against untrusted binaries. To allow plakar to run, you will need to explicitly approve it in the Privacy \u0026amp; Security settings.\nThe simplest way to install Plakar on Windows is by downloading the pre-built package from the Download page.\nThe downloaded package is simply an archive containing the executable. 
Copy this to anywhere on your system PATH, or run it directly from a shell where it is installed.\nTo install using the Go toolchain, use go install with the version you want to install, or latest:\n$ go install \u0026#34;github.com/PlakarKorp/plakar@v1.0.6\u0026#34; This will install the binary into your $GOPATH/bin directory, which you may need to add to your $PATH if it is not already there.\nArch Linux # Plakar is available on the Arch User Repository (AUR). If you use an AUR helper such as yay, you can install it with:\n$ yay -S plakar Building from Source # You can build Plakar from source. You will need:\nGo (Golang) make (available by default on most Linux distributions; on macOS, install the Xcode command line tools with xcode-select --install; on Windows, use WSL or a tool like GnuWin32 Make) Clone the repository and run make:\n$ git clone https://github.com/PlakarKorp/plakar.git $ cd plakar $ make This produces a plakar binary in the current directory. To build a specific release version, check out the corresponding tag before running make:\n$ git checkout v1.0.6 $ make Other Platforms # For other supported operating systems, or for an alternative to the methods mentioned above, it is possible to download pre-built binaries for different platforms and architectures from the Download page.\nThese are in standard formats for the relevant platforms, so consult OS-specific documentation for how to install them.\nVerifying the Installation # Verify the installation by running:\n$ plakar version This should return the expected version number, for example plakar/v1.0.6.\nDownloading Specific Versions # All release versions of Plakar are available directly from GitHub on the project\u0026rsquo;s release page.\nFor each release, check under the \u0026ldquo;Assets\u0026rdquo; section for a list of pre-built packages. 
They follow the naming convention plakar_\u0026lt;version\u0026gt;_\u0026lt;os\u0026gt;_\u0026lt;arch\u0026gt;.\u0026lt;format\u0026gt;.\nInstallation Troubleshooting # If you encounter any issues during installation, or notice that this documentation is out of date:\nEnsure you are following the instructions for the correct version of plakar. Open an issue on the GitHub issue tracker. Next Steps: Getting Started # Now that you have plakar installed, we recommend proceeding to the Quickstart guide to set up your first backup.\n","date":"11 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/quickstart/installation/","section":"Docs","summary":"Install Plakar and verify your installation.","title":"Installation","type":"docs"},{"content":" Installation # Beta Version Plakar v1.1.0 is still a beta release and it can only be installed from source. Only stable versions have distributed assets that can be installed using other OS-specific methods.\nTo build Plakar from source, you will need:\nGo (Golang) make (available by default on most Linux distributions; on macOS, install the Xcode command line tools with xcode-select --install; on Windows, use WSL or a tool like GnuWin32 Make) Clone the repository and run make:\n$ git clone https://github.com/PlakarKorp/plakar.git $ cd plakar $ make This produces a plakar binary in the current directory. To build a specific release version, check out the corresponding tag before running make:\n$ git checkout v1.1.0 $ make Verifying the Installation # Verify the installation by running:\n$ plakar version This should return the expected version number, for example plakar/v1.1.0.\nDownloading Specific Versions # All release versions of Plakar are available directly from GitHub on the project\u0026rsquo;s release page.\nFor each release, check under the \u0026ldquo;Assets\u0026rdquo; section for a list of pre-built packages. 
They follow the naming convention plakar_\u0026lt;version\u0026gt;_\u0026lt;os\u0026gt;_\u0026lt;arch\u0026gt;.\u0026lt;format\u0026gt;.\nInstallation Troubleshooting # If you encounter any issues during installation, or notice that this documentation is out of date:\nEnsure you are following the instructions for the correct version of plakar. Open an issue on the GitHub issue tracker. Next Steps: Getting Started # Now that you have plakar installed, we recommend proceeding to the Quickstart guide to set up your first backup.\n","date":"11 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/quickstart/installation/","section":"Docs","summary":"Install Plakar and verify your installation.","title":"Installation","type":"docs"},{"content":" Plakar: v1.1.0 # Getting Started Overview Installation Quickstart Synchronize multiple copies Backup non-filesystem data Guides Scheduling Tasks Importing Configurations Creating a Kloset Store Serving a Kloset Store over HTTP Excluding files from a backup Retrieving secrets via external command Creating a custom connector Logging In to Plakar Managing packages Pruning snapshots MySQL PostgreSQL OVHcloud Exoscale Integrations S3 SFTP / SSH Notion Dropbox iCloud Drive Koofr Google Drive OneDrive OpenDrive Proton Drive Proxmox Kubernetes etcd Explanations How Plakar Works Should you push or pull backups How many Kloset Stores should you create Why multiple backup copies matter Why you need to backup your SaaS How Maintenance Works References Plakar Ptar Command line syntax Go Kloset SDK Commands Community ","date":"11 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/","section":"Docs","summary":"Plakar documentation hub, find guides, references, and resources for working with Plakar.","title":"Plakar: v1.1.0 (beta)","type":"docs"},{"content":" Billing # Plakar Control Plane is available under three plans designed to fit different needs and organization sizes.\nFree Plan # The Free plan is the easiest way to discover and 
get started with Plakar Control Plane. It supports up to 500GB of managed data, making it well suited for small configurations, personal use, or evaluating the platform before committing to a larger deployment.\nCommunity Plan # The Community plan is available for organizations that qualify under one of the following:\nNon-profit organizations Startups under 3 years old and pre-Series A Organizations operating in war zones that require assistance Eligible contributors or community members If you think your organization qualifies, reach out to us at sales@plakar.io and we will get back to you.\nEnterprise Plan # The Enterprise plan is a consumption-based plan built for organizations managing large or complex backup infrastructures. It includes premium support and gives you direct access to the Plakar team when you need it. To discuss pricing and what is included, contact sales@plakar.io.\nUpgrading your plan # To upgrade your plan or make changes to your license, reach out to sales@plakar.io and the team will help you get sorted.\n","date":"29 April 2026","externalUrl":null,"permalink":"/control-plane-docs/intro/billing/","section":"Control Plane Docs","summary":"Plakar Control Plane plans and how to manage your license.","title":"Billing \u0026 Plans","type":"control-plane-docs"},{"content":" Notion # This integration is currently in beta This integration has not yet reached a stable release. It\u0026rsquo;s functional for testing and evaluation but should not be relied upon for production use cases.\nSee the current limitations section below for current known issues and planned improvements.\nThe Notion integration enables backup and restoration of Notion workspaces through the official Notion API. 
All workspace content—including pages, databases, blocks, and hierarchical relationships—is captured as structured JSON and stored in a Kloset store.\nThe Notion integration provides two connectors:\nConnector type Description Source connector Back up a Notion workspace into a Kloset store. Destination connector Restore a Notion workspace from a Kloset store. Installation # The Notion package can be installed using pre-built binaries or compiled from source.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Notion package:\n$ plakar pkg add notion Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build notion A package archive will be created in the current directory (e.g., notion_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./notion_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nNotion API Setup # Before using the Plakar Notion integration, you must create an internal integration in your Notion workspace.\nCreate a Notion integration # Go to https://www.notion.so/profile/integrations Click Create a new integration Configure the integration: Name: Choose a descriptive name (e.g., \u0026ldquo;Plakar Backup\u0026rdquo;) Type: Select Internal Associated workspace: Select the workspace you want to back up Click Create to create the integration Click on the Configure integration settings in the success popup Configure integration capabilities # After creating the integration, you need to enable the required capabilities:\nIn the integration settings page, scroll to the 
Capabilities section Enable the following capabilities: Read content: Required for backing up pages and databases Update content: Required for restoring pages and databases Insert content: Required for restoring pages and databases Read comments: Required for backing up discussion threads Insert comments: Required for restoring discussion threads Click Save Copy the API token # In the integration settings page just before the Capabilities section, there\u0026rsquo;s the token section Click Show on the \u0026ldquo;Internal Integration Secret\u0026rdquo; Copy the token (format: ntn_xxx...) Notion Token Keep this token secure. Anyone with this token can access and modify pages that have been shared with this integration.\nSet up page access to the integration # You must enable top-level page access to the integration:\nClick on Edit Access Select the top-level pages and databases you want to back up Click Save Sharing Pages Once a page is shared with the integration, all child pages are automatically included during backup. You only need to share top-level pages.\nSource connector # The source connector retrieves Notion workspace data via the API and stores it as structured JSON. 
This includes page content, databases, blocks, metadata, and hierarchical relationships.\nflowchart LR subgraph Source[\"Notion\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via Notion source connector\"] Transform[\"Transform data as a structured JSON document\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Transform --\u003e Store Requirements # Before configuring the source connector, ensure you have:\nCompleted Notion API setup (see section above) Notion API token from your integration Shared at least one page with your integration Configuration # Create a Notion source configuration:\n$ plakar source add mynotion location=notion:// token=$NOTION_API_TOKEN Back up the workspace to a Kloset store:\n$ plakar at /var/backups backup \u0026#34;@mynotion\u0026#34; Configuration options # Option Required Description location Yes Must be set to notion:// token Yes Your Notion API token (format: ntn_xxx...) What gets backed up # The source connector captures:\nPages: All content, blocks, and page properties Databases: Structure, properties, views, and all entries Media: Images, files, and embedded content (stored as references) Comments: Discussion threads and annotations Metadata: Creation dates, authors, last edited information Relationships: Parent-child hierarchies and database links Destination connector # The destination connector reads structured JSON from a Kloset store and recreates pages and content in Notion via the API.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Transform[\"Reconstruct data from structured JSON document\"] Via[\"Restore data via Notion destination connector\"] subgraph Destination[\"Notion\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Transform --\u003e Via --\u003e FS Requirements # Before configuring the destination connector, ensure:\nCompleted Notion API setup with insert capabilities enabled Created or identified a target page where content will be restored Shared the target page with 
your integration Have the Page ID of the target page Finding a Page ID # You need to get the Page ID of the page where you want to restore the backup contents. To find a Notion Page ID:\nOpen the page in Notion Click Share in the top right, then Copy link The URL format is: https://www.notion.so/PageName-PAGE_ID Extract the Page ID (the long alphanumeric string after the last dash) Example: In https://www.notion.so/MyPage-1234567890abcdef1234567890abcdef, the Page ID is 1234567890abcdef1234567890abcdef.\nConfiguration # Create a Notion destination configuration:\n$ plakar destination add mynotion location=notion:// token=$NOTION_API_TOKEN Set the target page ID for restoration:\n$ plakar destination set mynotion rootID=$NOTION_PAGE_ID Restore a snapshot:\n$ plakar at /var/backups restore -to \u0026#34;@mynotion\u0026#34; \u0026lt;snapshot_id\u0026gt; Configuration options # Option Required Description location Yes Must be set to notion:// token Yes Notion API token with insert permissions rootID Yes Notion Page ID where content will be restored Current limitations # Permission model: Each top-level page must be manually shared with the integration. Pages not explicitly shared will not be backed up, even if linked from shared pages. Block compatibility: Some third-party or custom Notion blocks may not serialize perfectly. All standard Notion blocks are fully supported. Media restoration: Due to Notion API limitations, media files (images, PDFs, documents) cannot be restored directly. You can restore media to the filesystem and manually re-upload. Restoration target: Restoring requires an existing Notion Page ID as the destination. The API does not support creating new top-level pages. 
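The Page ID extraction described in the steps above can be automated. This is an illustrative helper, not part of Plakar; it assumes the compact 32-hex-character ID format shown in the example, and tolerates links that carry query parameters:

```python
import re

# Illustrative helper (not Plakar code): extract the 32-hex-character
# Page ID from a Notion share link, per "Finding a Page ID" above.
def notion_page_id(share_url):
    match = re.search(r"([0-9a-f]{32})(?:[?#]|$)", share_url)
    if match is None:
        raise ValueError(f"no Page ID found in {share_url!r}")
    return match.group(1)

url = "https://www.notion.so/MyPage-1234567890abcdef1234567890abcdef"
print(notion_page_id(url))  # 1234567890abcdef1234567890abcdef
```

The extracted value is what you would pass as rootID when configuring the destination.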
","date":"20 March 2026","externalUrl":null,"permalink":"/docs/main/integrations/notion/","section":"Docs","summary":"Back up and restore your Notion workspace with Plakar.","title":"Notion","type":"docs"},{"content":" Notion # This integration is currently in beta This integration has not yet reached a stable release. It\u0026rsquo;s functional for testing and evaluation but should not be relied upon for production use cases.\nSee the current limitations section below for current known issues and planned improvements.\nThe Notion integration enables backup and restoration of Notion workspaces through the official Notion API. All workspace content—including pages, databases, blocks, and hierarchical relationships—is captured as structured JSON and stored in a Kloset store.\nThe Notion integration provides two connectors:\nConnector type Description Source connector Back up a Notion workspace into a Kloset store. Destination connector Restore a Notion workspace from a Kloset store. Installation # The Notion package can be installed using pre-built binaries or compiled from source.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Notion package:\n$ plakar pkg add notion Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build notion A package archive will be created in the current directory (e.g., notion_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./notion_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nNotion API Setup # Before using the Plakar Notion integration, you must create an internal integration in your Notion workspace.\nCreate a Notion integration # Go to https://www.notion.so/profile/integrations Click Create a new integration Configure the integration: Name: Choose a descriptive name (e.g., \u0026ldquo;Plakar Backup\u0026rdquo;) Type: Select Internal Associated workspace: Select the workspace you want to back up Click Create to create the integration Click on the Configure integration settings in the success popup Configure integration capabilities # After creating the integration, you need to enable the required capabilities:\nIn the integration settings page, scroll to the Capabilities section Enable the following capabilities: Read content: Required for backing up pages and databases Update content: Required for restoring pages and databases Insert content: Required for restoring pages and databases Read comments: Required for backing up discussion threads Insert comments: Required for restoring discussion threads Click Save Copy the API token # In the integration settings page just before the Capabilities section, there\u0026rsquo;s the token section Click Show on the \u0026ldquo;Internal Integration Secret\u0026rdquo; Copy the token (format: ntn_xxx...) Notion Token Keep this token secure. 
Anyone with this token can access and modify pages that have been shared with this integration.\nSet up page access to the integration # You must enable top-level page access to the integration:\nClick on Edit Access Select the top-level pages and databases you want to back up Click Save Sharing Pages Once a page is shared with the integration, all child pages are automatically included during backup. You only need to share top-level pages.\nSource connector # The source connector retrieves Notion workspace data via the API and stores it as structured JSON. This includes page content, databases, blocks, metadata, and hierarchical relationships.\nflowchart LR subgraph Source[\"Notion\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via Notion source connector\"] Transform[\"Transform data as a structured JSON document\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Transform --\u003e Store Requirements # Before configuring the source connector, ensure you have:\nCompleted Notion API setup (see section above) Notion API token from your integration Shared at least one page with your integration Configuration # Create a Notion source configuration:\n$ plakar source add mynotion location=notion:// token=$NOTION_API_TOKEN Back up the workspace to a Kloset store:\n$ plakar at /var/backups backup \u0026#34;@mynotion\u0026#34; Configuration options # Option Required Description location Yes Must be set to notion:// token Yes Your Notion API token (format: ntn_xxx...) 
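Before running a backup, the required options in the table above can be sanity-checked. The checks below are an illustrative sketch: the option names (location, token) and the ntn_ token prefix come from this page, while the validation logic itself is our assumption, not Plakar code:

```python
# Illustrative pre-flight check for the source-connector options above.
# Option names and the ntn_ prefix come from the docs; the checks are ours.
def validate_notion_source(options):
    errors = []
    if options.get("location") != "notion://":
        errors.append("location must be set to notion://")
    if not options.get("token", "").startswith("ntn_"):
        errors.append("token should look like ntn_xxx...")
    return errors

config = {"location": "notion://", "token": "ntn_abc123"}
print(validate_notion_source(config))  # []
```

An empty list means both required options have the expected shape; anything else lists what to fix before running plakar backup.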
What gets backed up # The source connector captures:\nPages: All content, blocks, and page properties Databases: Structure, properties, views, and all entries Media: Images, files, and embedded content (stored as references) Comments: Discussion threads and annotations Metadata: Creation dates, authors, last edited information Relationships: Parent-child hierarchies and database links Destination connector # The destination connector reads structured JSON from a Kloset store and recreates pages and content in Notion via the API.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Transform[\"Reconstruct data from structured JSON document\"] Via[\"Restore data via Notion destination connector\"] subgraph Destination[\"Notion\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Transform --\u003e Via --\u003e FS Requirements # Before configuring the destination connector, ensure:\nCompleted Notion API setup with insert capabilities enabled Created or identified a target page where content will be restored Shared the target page with your integration Have the Page ID of the target page Finding a Page ID # You need to get the Page ID of the page where you want to restore the backup contents. 
To find a Notion Page ID:\nOpen the page in Notion Click Share in the top right, then Copy link The URL format is: https://www.notion.so/PageName-PAGE_ID Extract the Page ID (the long alphanumeric string after the last dash) Example: In https://www.notion.so/MyPage-1234567890abcdef1234567890abcdef, the Page ID is 1234567890abcdef1234567890abcdef.\nConfiguration # Create a Notion destination configuration:\n$ plakar destination add mynotion location=notion:// token=$NOTION_API_TOKEN Set the target page ID for restoration:\n$ plakar destination set mynotion rootID=$NOTION_PAGE_ID Restore a snapshot:\n$ plakar at /var/backups restore -to \u0026#34;@mynotion\u0026#34; \u0026lt;snapshot_id\u0026gt; Configuration options # Option Required Description location Yes Must be set to notion:// token Yes Notion API token with insert permissions rootID Yes Notion Page ID where content will be restored Current limitations # Permission model: Each top-level page must be manually shared with the integration. Pages not explicitly shared will not be backed up, even if linked from shared pages. Block compatibility: Some third-party or custom Notion blocks may not serialize perfectly. All standard Notion blocks are fully supported. Media restoration: Due to Notion API limitations, media files (images, PDFs, documents) cannot be restored directly. You can restore media to the filesystem and manually re-upload. Restoration target: Restoring requires an existing Notion Page ID as the destination. The API does not support creating new top-level pages. ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/integrations/notion/","section":"Docs","summary":"Back up and restore your Notion workspace with Plakar.","title":"Notion","type":"docs"},{"content":" Notion # This integration is currently in beta This integration has not yet reached a stable release. 
It\u0026rsquo;s functional for testing and evaluation but should not be relied upon for production use cases.\nSee the current limitations section below for current known issues and planned improvements.\nThe Notion integration enables backup and restoration of Notion workspaces through the official Notion API. All workspace content—including pages, databases, blocks, and hierarchical relationships—is captured as structured JSON and stored in a Kloset store.\nThe Notion integration provides two connectors:\nConnector type Description Source connector Back up a Notion workspace into a Kloset store. Destination connector Restore a Notion workspace from a Kloset store. Installation # The Notion package can be installed using pre-built binaries or compiled from source.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Notion package:\n$ plakar pkg add notion Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build notion A package archive will be created in the current directory (e.g., notion_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./notion_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nNotion API Setup # Before using the Plakar Notion integration, you must create an internal integration in your Notion workspace.\nCreate a Notion integration # Go to https://www.notion.so/profile/integrations Click Create a new integration Configure the integration: Name: Choose a descriptive name (e.g., \u0026ldquo;Plakar Backup\u0026rdquo;) Type: Select Internal Associated workspace: 
Select the workspace you want to back up Click Create to create the integration Click Configure integration settings in the success popup Configure integration capabilities # After creating the integration, you need to enable the required capabilities:\nIn the integration settings page, scroll to the Capabilities section Enable the following capabilities: Read content: Required for backing up pages and databases Update content: Required for restoring pages and databases Insert content: Required for restoring pages and databases Read comments: Required for backing up discussion threads Insert comments: Required for restoring discussion threads Click Save Copy the API token # On the integration settings page, find the token section just above the Capabilities section Click Show next to the \u0026ldquo;Internal Integration Secret\u0026rdquo; Copy the token (format: ntn_xxx...) Notion Token Keep this token secure. Anyone with this token can access and modify pages that have been shared with this integration.\nSet up page access for the integration # You must grant the integration access to your top-level pages:\nClick Edit Access Select the top-level pages and databases you want to back up Click Save Sharing Pages Once a page is shared with the integration, all child pages are automatically included during backup. You only need to share top-level pages.\nSource connector # The source connector retrieves Notion workspace data via the API and stores it as structured JSON.
This includes page content, databases, blocks, metadata, and hierarchical relationships.\nflowchart LR subgraph Source[\"Notion\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaNotion source connector\"] Transform[\"Transform data as a structured JSON document\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Transform --\u003e Store Requirements # Before configuring the source connector, ensure you have:\nCompleted Notion API setup (see section above) Notion API token from your integration Shared at least one page with your integration Configuration # Create a Notion source configuration:\n$ plakar source add mynotion location=notion:// token=$NOTION_API_TOKEN Back up the workspace to a Kloset store:\n$ plakar at /var/backups backup \u0026#34;@mynotion\u0026#34; Configuration options # Option Required Description location Yes Must be set to notion:// token Yes Your Notion API token (format: ntn_xxx...) What gets backed up # The source connector captures:\nPages: All content, blocks, and page properties Databases: Structure, properties, views, and all entries Media: Images, files, and embedded content (stored as references) Comments: Discussion threads and annotations Metadata: Creation dates, authors, last edited information Relationships: Parent-child hierarchies and database links Destination connector # The destination connector reads structured JSON from a Kloset store and recreates pages and content in Notion via the API.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Transform[\"Reconstruct data from structured JSON document\"] Via[\"Restore data viaNotion destination connector\"] subgraph Destination[\"Notion\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Transform --\u003e Via --\u003e FS Requirements # Before configuring the destination connector, ensure:\nCompleted Notion API setup with insert capabilities enabled Created or identified a target page where content will be restored Shared the target page with 
your integration Have the Page ID of the target page Finding a Page ID # You need to get the Page ID of the page where you want to restore the backup contents. To find a Notion Page ID:\nOpen the page in Notion Click Share in the top right, then Copy link The URL format is: https://www.notion.so/PageName-PAGE_ID Extract the Page ID (the long alphanumeric string after the last dash) Example: In https://www.notion.so/MyPage-1234567890abcdef1234567890abcdef, the Page ID is 1234567890abcdef1234567890abcdef.\nConfiguration # Create a Notion destination configuration:\n$ plakar destination add mynotion location=notion:// token=$NOTION_API_TOKEN Set the target page ID for restoration:\n$ plakar destination set mynotion rootID=$NOTION_PAGE_ID Restore a snapshot:\n$ plakar at /var/backups restore -to \u0026#34;@mynotion\u0026#34; \u0026lt;snapshot_id\u0026gt; Configuration options # Option Required Description location Yes Must be set to notion:// token Yes Notion API token with insert permissions rootID Yes Notion Page ID where content will be restored Current limitations # Permission model: Each top-level page must be manually shared with the integration. Pages not explicitly shared will not be backed up, even if linked from shared pages. Block compatibility: Some third-party or custom Notion blocks may not serialize perfectly. All standard Notion blocks are fully supported. Media restoration: Due to Notion API limitations, media files (images, PDFs, documents) cannot be restored directly. You can restore media to the filesystem and manually re-upload. Restoration target: Restoring requires an existing Notion Page ID as the destination. The API does not support creating new top-level pages. 
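The Page ID extraction described above can be sketched as a small shell snippet; the URL is the illustrative example from this page, not a real workspace link:\n# Illustrative only: keep everything after the last dash of a copied\n# Notion link, matching the URL format described above.\nurl=https://www.notion.so/MyPage-1234567890abcdef1234567890abcdef\npage_id=${url##*-}\necho $page_id\nThe resulting value is what the rootID option shown above expects.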
","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/integrations/notion/","section":"Docs","summary":"Back up and restore your Notion workspace with Plakar.","title":"Notion","type":"docs"},{"content":" Notion # This integration is currently in beta This integration has not yet reached a stable release. It\u0026rsquo;s functional for testing and evaluation but should not be relied upon for production use cases.\nSee the current limitations section below for known issues and planned improvements.\nThe Notion integration enables backup and restoration of Notion workspaces through the official Notion API. All workspace content—including pages, databases, blocks, and hierarchical relationships—is captured as structured JSON and stored in a Kloset store.\nThe Notion integration provides two connectors:\nConnector type Description Source connector Back up a Notion workspace into a Kloset store. Destination connector Restore a Notion workspace from a Kloset store. Installation # The Notion package can be installed using pre-built binaries or compiled from source.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication.
See Logging in to Plakar for details.\nInstall the Notion package:\n$ plakar pkg add notion Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build notion A package archive will be created in the current directory (e.g., notion_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./notion_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nNotion API Setup # Before using the Plakar Notion integration, you must create an internal integration in your Notion workspace.\nCreate a Notion integration # Go to https://www.notion.so/profile/integrations Click Create a new integration Configure the integration: Name: Choose a descriptive name (e.g., \u0026ldquo;Plakar Backup\u0026rdquo;) Type: Select Internal Associated workspace: Select the workspace you want to back up Click Create to create the integration Click Configure integration settings in the success popup Configure integration capabilities # After creating the integration, you need to enable the required capabilities:\nIn the integration settings page, scroll to the Capabilities section Enable the following capabilities: Read content: Required for backing up pages and databases Update content: Required for restoring pages and databases Insert content: Required for restoring pages and databases Read comments: Required for backing up discussion threads Insert comments: Required for restoring discussion threads Click Save Copy the API token # On the integration settings page, find the token section just above the Capabilities section Click Show next to the \u0026ldquo;Internal Integration Secret\u0026rdquo; Copy the token (format: ntn_xxx...) Notion Token Keep this token secure.
Anyone with this token can access and modify pages that have been shared with this integration.\nSet up page access for the integration # You must grant the integration access to your top-level pages:\nClick Edit Access Select the top-level pages and databases you want to back up Click Save Sharing Pages Once a page is shared with the integration, all child pages are automatically included during backup. You only need to share top-level pages.\nSource connector # The source connector retrieves Notion workspace data via the API and stores it as structured JSON. This includes page content, databases, blocks, metadata, and hierarchical relationships.\nflowchart LR subgraph Source[\"Notion\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via Notion source connector\"] Transform[\"Transform data as a structured JSON document\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Transform --\u003e Store Requirements # Before configuring the source connector, ensure you have:\nCompleted Notion API setup (see section above) Notion API token from your integration Shared at least one page with your integration Configuration # Create a Notion source configuration:\n$ plakar source add mynotion location=notion:// token=$NOTION_API_TOKEN Back up the workspace to a Kloset store:\n$ plakar at /var/backups backup \u0026#34;@mynotion\u0026#34; Configuration options # Option Required Description location Yes Must be set to notion:// token Yes Your Notion API token (format: ntn_xxx...)
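Before configuring the connector, you can run a quick local sanity check on the token shape; this is a sketch, the ntn_ prefix is the format shown above, and the example token is fake:\n# Hypothetical pre-flight check: integration tokens shown above start with ntn_.\ntoken=ntn_exampletoken123\ncase $token in\n  ntn_*) echo token format looks valid ;;\n  *)     echo unexpected token format ;;\nesac\nThis only checks the prefix; use plakar backup against a test page to confirm the token actually works.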
What gets backed up # The source connector captures:\nPages: All content, blocks, and page properties Databases: Structure, properties, views, and all entries Media: Images, files, and embedded content (stored as references) Comments: Discussion threads and annotations Metadata: Creation dates, authors, last edited information Relationships: Parent-child hierarchies and database links Destination connector # The destination connector reads structured JSON from a Kloset store and recreates pages and content in Notion via the API.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Transform[\"Reconstruct data from structured JSON document\"] Via[\"Restore data viaNotion destination connector\"] subgraph Destination[\"Notion\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Transform --\u003e Via --\u003e FS Requirements # Before configuring the destination connector, ensure:\nCompleted Notion API setup with insert capabilities enabled Created or identified a target page where content will be restored Shared the target page with your integration Have the Page ID of the target page Finding a Page ID # You need to get the Page ID of the page where you want to restore the backup contents. 
To find a Notion Page ID:\nOpen the page in Notion Click Share in the top right, then Copy link The URL format is: https://www.notion.so/PageName-PAGE_ID Extract the Page ID (the long alphanumeric string after the last dash) Example: In https://www.notion.so/MyPage-1234567890abcdef1234567890abcdef, the Page ID is 1234567890abcdef1234567890abcdef.\nConfiguration # Create a Notion destination configuration:\n$ plakar destination add mynotion location=notion:// token=$NOTION_API_TOKEN Set the target page ID for restoration:\n$ plakar destination set mynotion rootID=$NOTION_PAGE_ID Restore a snapshot:\n$ plakar at /var/backups restore -to \u0026#34;@mynotion\u0026#34; \u0026lt;snapshot_id\u0026gt; Configuration options # Option Required Description location Yes Must be set to notion:// token Yes Notion API token with insert permissions rootID Yes Notion Page ID where content will be restored Current limitations # Permission model: Each top-level page must be manually shared with the integration. Pages not explicitly shared will not be backed up, even if linked from shared pages. Block compatibility: Some third-party or custom Notion blocks may not serialize perfectly. All standard Notion blocks are fully supported. Media restoration: Due to Notion API limitations, media files (images, PDFs, documents) cannot be restored directly. You can restore media to the filesystem and manually re-upload. Restoration target: Restoring requires an existing Notion Page ID as the destination. The API does not support creating new top-level pages. ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/integrations/notion/","section":"Docs","summary":"Back up and restore your Notion workspace with Plakar.","title":"Notion","type":"docs"},{"content":" Go Kloset SDK # The Go Kloset SDK enables building Plakar integrations as standalone plugins. 
Plugins communicate with Plakar over gRPC through stdin/stdout and can provide:\nImporters - Read data from sources (used during backup) Exporters - Write data to destinations (used during restore) Storage - Custom storage backends for repositories Installation # $ go get github.com/PlakarKorp/go-kloset-sdk Entry Points # EntrypointImporter # func EntrypointImporter(args []string, constructor ImporterConstructor) Entry point for importer plugins. Call from main().\nExample:\npackage main import ( \u0026#34;os\u0026#34; sdk \u0026#34;github.com/PlakarKorp/go-kloset-sdk\u0026#34; connector \u0026#34;github.com/yourorg/integration\u0026#34; ) func main() { sdk.EntrypointImporter(os.Args, connector.NewImporter) } EntrypointExporter # func EntrypointExporter(args []string, constructor ExporterConstructor) Entry point for exporter plugins.\nExample:\npackage main import ( \u0026#34;os\u0026#34; sdk \u0026#34;github.com/PlakarKorp/go-kloset-sdk\u0026#34; connector \u0026#34;github.com/yourorg/integration\u0026#34; ) func main() { sdk.EntrypointExporter(os.Args, connector.NewExporter) } EntrypointStorage # func EntrypointStorage(args []string, constructor StoreConstructor) Entry point for storage backend plugins.\nExample:\npackage main import ( \u0026#34;os\u0026#34; sdk \u0026#34;github.com/PlakarKorp/go-kloset-sdk\u0026#34; connector \u0026#34;github.com/yourorg/integration\u0026#34; ) func main() { sdk.EntrypointStorage(os.Args, connector.NewStore) } Importer Interface # Registration # func Register(protocol string, flags location.Flags, constructor Constructor) Register an importer. 
Call in init().\nExample:\nfunc init() { importer.Register(\u0026#34;myprotocol\u0026#34;, 0, NewImporter) } Constructor # type Constructor func( ctx context.Context, opts *connectors.Options, proto string, config map[string]string, ) (Importer, error) Parameters:\nctx - Context for cancellation opts - Configuration options (excludes, hostname, max concurrency) proto - Protocol name config - Configuration map with location and other parameters Interface Methods # type Importer interface { Root() string Origin() string Type() string Flags() location.Flags Ping(ctx context.Context) error Import(ctx context.Context, records chan\u0026lt;- *connectors.Record, results \u0026lt;-chan *connectors.Result) error Close(ctx context.Context) error } Root # Returns the root path being imported.\nOrigin # Returns the origin/source identifier (e.g., hostname, bucket name).\nType # Returns the protocol name.\nFlags # Returns location flags describing characteristics:\n0 - Remote/network sources location.FLAG_LOCALFS - Local filesystem location.FLAG_STREAM - Single-use import (disables progress bar) location.FLAG_NEEDACK - Reads from results channel Combine with bitwise OR: location.FLAG_LOCALFS | location.FLAG_STREAM\nPing # Tests source connectivity before import begins.\nImport # Main import function. 
Sends file records through channel.\nImportant:\nAlways defer close(records) at start Ignore results unless FLAG_NEEDACK is set Example:\nfunc (i *MyImporter) Import(ctx context.Context, records chan\u0026lt;- *connectors.Record, results \u0026lt;-chan *connectors.Result) error { defer close(records) info, _ := os.Stat(path) fi := objects.FileInfo{ Lname: filepath.Base(path), Lsize: info.Size(), Lmode: info.Mode(), LmodTime: info.ModTime(), Ldev: 1, } records \u0026lt;- connectors.NewRecord(path, \u0026#34;\u0026#34;, fi, nil, func() (io.ReadCloser, error) { return os.Open(path) }) return nil } Close # Cleanup function called after import completes.\nExporter Interface # Registration # func Register(protocol string, flags location.Flags, constructor Constructor) Register an exporter. Call in init().\nConstructor # type Constructor func( ctx context.Context, opts *connectors.Options, proto string, config map[string]string, ) (Exporter, error) Interface Methods # type Exporter interface { Root() string Origin() string Type() string Flags() location.Flags Ping(ctx context.Context) error Export(ctx context.Context, records \u0026lt;-chan *connectors.Record, results chan\u0026lt;- *connectors.Result) error Close(ctx context.Context) error } Methods are identical to Importer except for Export().\nExport # Receives records from channel and processes them.\nImportant:\nAlways defer close(results) at start Send result for each record: record.Ok() or record.Error(err) record.Ok() and record.Error() close the reader automatically Example:\nfunc (e *MyExporter) Export(ctx context.Context, records \u0026lt;-chan *connectors.Record, results chan\u0026lt;- *connectors.Result) error { defer close(results) for record := range records { if record.Reader != nil { // Process record content io.Copy(destination, record.Reader) } results \u0026lt;- record.Ok() } return nil } Storage Interface # Registration # func Register(protocol string, flags location.Flags, constructor Constructor) 
Register a storage backend. Call in init().\nConstructor # type Constructor func( ctx context.Context, proto string, config map[string]string, ) (Store, error) Note: Storage constructor does not receive *connectors.Options.\nInterface Methods # type Store interface { Create(ctx context.Context, config []byte) error Open(ctx context.Context) ([]byte, error) Ping(ctx context.Context) error Origin() string Type() string Root() string Flags() location.Flags Mode(ctx context.Context) (Mode, error) Size(ctx context.Context) (int64, error) List(ctx context.Context, StorageResource) ([]objects.MAC, error) Put(ctx context.Context, StorageResource, objects.MAC, io.Reader) (int64, error) Get(ctx context.Context, StorageResource, objects.MAC, *Range) (io.ReadCloser, error) Delete(ctx context.Context, StorageResource, objects.MAC) error Close(ctx context.Context) error } Create # Initializes a new repository with configuration data.\nOpen # Opens an existing repository and returns its configuration.\nMode # Returns storage capabilities: storage.ModeRead | storage.ModeWrite\nSize # Returns total storage size in bytes.\nList # Lists objects of a given resource type:\nstorage.StorageResourcePackfile storage.StorageResourceState storage.StorageResourceLock Put # Stores an object identified by MAC (Message Authentication Code).\nGet # Retrieves an object by MAC. 
Optional range parameter for partial reads.\nDelete # Removes an object by MAC.\nHelper Types # connectors.Options # type Options struct { Excludes []string Hostname string MaxConcurrency int } connectors.Record # func NewRecord( pathname string, target string, fi objects.FileInfo, xattrs map[string][]byte, contentReader func() (io.ReadCloser, error), ) *Record Creates a file record with lazy-loading content reader.\nParameters:\npathname - Full file path target - Symlink target (empty for regular files) fi - File metadata xattrs - Extended attributes (can be nil) contentReader - Function to open file content objects.FileInfo # type FileInfo struct { Lname string Lsize int64 Lmode fs.FileMode LmodTime time.Time Ldev uint64 } Important Notes # Do Not Write to Stdout # Plugins communicate via gRPC over stdin/stdout. Writing to os.Stdout corrupts the stream. Always use os.Stderr for logging:\nfmt.Fprintf(os.Stderr, \u0026#34;debug: %s\\n\u0026#34;, msg) Location Flags # Set flags in both code (registration and Flags() method) and manifest:\nFlag Manifest Description location.FLAG_LOCALFS localfs Local filesystem paths location.FLAG_FILE file Single-file storage location.FLAG_STREAM stream Single-use import location.FLAG_NEEDACK needack Reads results channel Complete Example # See the integration example repository for an integration implementation with importer, exporter, and storage connectors.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/references/sdk/","section":"Docs","summary":"Go SDK reference for building Plakar integrations.","title":"Go Kloset SDK","type":"docs"},{"content":" Go Kloset SDK # The Go Kloset SDK enables building Plakar integrations as standalone plugins. 
Plugins communicate with Plakar over gRPC through stdin/stdout and can provide:\nImporters - Read data from sources (used during backup) Exporters - Write data to destinations (used during restore) Storage - Custom storage backends for repositories Installation # $ go get github.com/PlakarKorp/go-kloset-sdk Entry Points # EntrypointImporter # func EntrypointImporter(args []string, constructor ImporterConstructor) Entry point for importer plugins. Call from main().\nExample:\npackage main import ( \u0026#34;os\u0026#34; sdk \u0026#34;github.com/PlakarKorp/go-kloset-sdk\u0026#34; connector \u0026#34;github.com/yourorg/integration\u0026#34; ) func main() { sdk.EntrypointImporter(os.Args, connector.NewImporter) } EntrypointExporter # func EntrypointExporter(args []string, constructor ExporterConstructor) Entry point for exporter plugins.\nExample:\npackage main import ( \u0026#34;os\u0026#34; sdk \u0026#34;github.com/PlakarKorp/go-kloset-sdk\u0026#34; connector \u0026#34;github.com/yourorg/integration\u0026#34; ) func main() { sdk.EntrypointExporter(os.Args, connector.NewExporter) } EntrypointStorage # func EntrypointStorage(args []string, constructor StoreConstructor) Entry point for storage backend plugins.\nExample:\npackage main import ( \u0026#34;os\u0026#34; sdk \u0026#34;github.com/PlakarKorp/go-kloset-sdk\u0026#34; connector \u0026#34;github.com/yourorg/integration\u0026#34; ) func main() { sdk.EntrypointStorage(os.Args, connector.NewStore) } Importer Interface # Registration # func Register(protocol string, flags location.Flags, constructor Constructor) Register an importer. 
Call in init().\nExample:\nfunc init() { importer.Register(\u0026#34;myprotocol\u0026#34;, 0, NewImporter) } Constructor # type Constructor func( ctx context.Context, opts *connectors.Options, proto string, config map[string]string, ) (Importer, error) Parameters:\nctx - Context for cancellation opts - Configuration options (excludes, hostname, max concurrency) proto - Protocol name config - Configuration map with location and other parameters Interface Methods # type Importer interface { Root() string Origin() string Type() string Flags() location.Flags Ping(ctx context.Context) error Import(ctx context.Context, records chan\u0026lt;- *connectors.Record, results \u0026lt;-chan *connectors.Result) error Close(ctx context.Context) error } Root # Returns the root path being imported.\nOrigin # Returns the origin/source identifier (e.g., hostname, bucket name).\nType # Returns the protocol name.\nFlags # Returns location flags describing characteristics:\n0 - Remote/network sources location.FLAG_LOCALFS - Local filesystem location.FLAG_STREAM - Single-use import (disables progress bar) location.FLAG_NEEDACK - Reads from results channel Combine with bitwise OR: location.FLAG_LOCALFS | location.FLAG_STREAM\nPing # Tests source connectivity before import begins.\nImport # Main import function. 
Sends file records through channel.\nImportant:\nAlways defer close(records) at start Ignore results unless FLAG_NEEDACK is set Example:\nfunc (i *MyImporter) Import(ctx context.Context, records chan\u0026lt;- *connectors.Record, results \u0026lt;-chan *connectors.Result) error { defer close(records) info, _ := os.Stat(path) fi := objects.FileInfo{ Lname: filepath.Base(path), Lsize: info.Size(), Lmode: info.Mode(), LmodTime: info.ModTime(), Ldev: 1, } records \u0026lt;- connectors.NewRecord(path, \u0026#34;\u0026#34;, fi, nil, func() (io.ReadCloser, error) { return os.Open(path) }) return nil } Close # Cleanup function called after import completes.\nExporter Interface # Registration # func Register(protocol string, flags location.Flags, constructor Constructor) Register an exporter. Call in init().\nConstructor # type Constructor func( ctx context.Context, opts *connectors.Options, proto string, config map[string]string, ) (Exporter, error) Interface Methods # type Exporter interface { Root() string Origin() string Type() string Flags() location.Flags Ping(ctx context.Context) error Export(ctx context.Context, records \u0026lt;-chan *connectors.Record, results chan\u0026lt;- *connectors.Result) error Close(ctx context.Context) error } Methods are identical to Importer except for Export().\nExport # Receives records from channel and processes them.\nImportant:\nAlways defer close(results) at start Send result for each record: record.Ok() or record.Error(err) record.Ok() and record.Error() close the reader automatically Example:\nfunc (e *MyExporter) Export(ctx context.Context, records \u0026lt;-chan *connectors.Record, results chan\u0026lt;- *connectors.Result) error { defer close(results) for record := range records { if record.Reader != nil { // Process record content io.Copy(destination, record.Reader) } results \u0026lt;- record.Ok() } return nil } Storage Interface # Registration # func Register(protocol string, flags location.Flags, constructor Constructor) 
Register a storage backend. Call in init().\nConstructor # type Constructor func( ctx context.Context, proto string, config map[string]string, ) (Store, error) Note: Storage constructor does not receive *connectors.Options.\nInterface Methods # type Store interface { Create(ctx context.Context, config []byte) error Open(ctx context.Context) ([]byte, error) Ping(ctx context.Context) error Origin() string Type() string Root() string Flags() location.Flags Mode(ctx context.Context) (Mode, error) Size(ctx context.Context) (int64, error) List(ctx context.Context, StorageResource) ([]objects.MAC, error) Put(ctx context.Context, StorageResource, objects.MAC, io.Reader) (int64, error) Get(ctx context.Context, StorageResource, objects.MAC, *Range) (io.ReadCloser, error) Delete(ctx context.Context, StorageResource, objects.MAC) error Close(ctx context.Context) error } Create # Initializes a new repository with configuration data.\nOpen # Opens an existing repository and returns its configuration.\nMode # Returns storage capabilities: storage.ModeRead | storage.ModeWrite\nSize # Returns total storage size in bytes.\nList # Lists objects of a given resource type:\nstorage.StorageResourcePackfile storage.StorageResourceState storage.StorageResourceLock Put # Stores an object identified by MAC (Message Authentication Code).\nGet # Retrieves an object by MAC. 
Optional range parameter for partial reads.\nDelete # Removes an object by MAC.\nHelper Types # connectors.Options # type Options struct { Excludes []string Hostname string MaxConcurrency int } connectors.Record # func NewRecord( pathname string, target string, fi objects.FileInfo, xattrs map[string][]byte, contentReader func() (io.ReadCloser, error), ) *Record Creates a file record with lazy-loading content reader.\nParameters:\npathname - Full file path target - Symlink target (empty for regular files) fi - File metadata xattrs - Extended attributes (can be nil) contentReader - Function to open file content objects.FileInfo # type FileInfo struct { Lname string Lsize int64 Lmode fs.FileMode LmodTime time.Time Ldev uint64 } Important Notes # Do Not Write to Stdout # Plugins communicate via gRPC over stdin/stdout. Writing to os.Stdout corrupts the stream. Always use os.Stderr for logging:\nfmt.Fprintf(os.Stderr, \u0026#34;debug: %s\\n\u0026#34;, msg) Location Flags # Set flags in both code (registration and Flags() method) and manifest:\nFlag Manifest Description location.FLAG_LOCALFS localfs Local filesystem paths location.FLAG_FILE file Single-file storage location.FLAG_STREAM stream Single-use import location.FLAG_NEEDACK needack Reads results channel Complete Example # See the integration example repository for an integration implementation with importer, exporter, and storage connectors.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/sdk/","section":"Docs","summary":"Go SDK reference for building Plakar integrations.","title":"Go Kloset SDK","type":"docs"},{"content":" How many Kloset Stores should you create # A common design question when setting up backups with Plakar is how many Kloset Stores to create.\nShould you use:\nA single store for everything Separate stores for servers, SaaS data, or cloud buckets One store per system? There is no universal answer. 
The right choice depends on how your data is structured and how you want to manage it.\nThe key idea: Kloset Stores are deduplication boundaries # You can view a Kloset Store as a deduplication unit. Data is deduplicated within a store, but never across stores. This means the number of stores you create directly affects:\nStorage efficiency Encryption boundaries Operational complexity Understanding how similar your data is matters more than how many sources you have.\nWhen a single Kloset Store makes sense # Using one store is often the simplest option.\nThis works well when:\nBackup sizes are relatively small Data across sources is largely similar You want minimal operational overhead Example: Similar data across many servers # Imagine 10 servers, each with 100 GB of data. Most of that data is identical: operating systems, shared libraries, common applications.\nBy storing all backups in a single Kloset Store, Plakar can deduplicate the shared data. Instead of storing 1 TB, only the unique portions are kept.\nThis approach maximizes deduplication and keeps management simple.\nThese numbers are illustrative and do not account for compression.\nWhen multiple Kloset Stores are better # Multiple stores are often preferable when data sets have little or no overlap.\nExample: Independent data sets # Consider 10 S3 buckets, each containing 100 GB of unrelated data.\nBecause there is no meaningful overlap, a single Kloset Store would provide little deduplication benefit.
In this case, separating data into multiple stores can simplify management without increasing storage usage.\nSeparating stores for security or policy reasons # Deduplication is not the only reason to create multiple stores.\nYou may also want separation when:\nDifferent data sets require different encryption keys Access policies differ Data has different retention or compliance requirements Example: Same data, different trust boundaries # You might store internal backups and external customer backups separately, even if the data structure is similar, so each store can use a different encryption key.\nSmall data sets and simplicity # For many small backups (configuration files, small databases, metadata), the deduplication benefit may be minimal regardless of layout.\nIn these cases, using a single Kloset Store is often still the right choice simply because it is easier to operate.\nSummary # When deciding how many Kloset Stores to create, consider:\nHow similar your data sets are Whether deduplication efficiency matters Whether data needs to be isolated for security or policy reasons How much operational complexity you are willing to manage In practice, many environments start with a single store and introduce additional stores only when a clear need appears.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/explanations/how-many-kloset-stores/","section":"Docs","summary":"Understand how deduplication, data similarity, and security requirements affect the number of Kloset Stores you should use.","title":"How many Kloset Stores should you create","type":"docs"},{"content":" How many Kloset Stores should you create # A common design question when setting up backups with Plakar is how many Kloset Stores to create.\nShould you use:\nA single store for everything Separate stores for servers, SaaS data, or cloud buckets One store per system? There is no universal answer. 
The right choice depends on how your data is structured and how you want to manage it.\nThe key idea: Kloset Stores are deduplication boundaries # You can view a Kloset Store as a deduplication unit. Data is deduplicated within a store, but never across stores. This means the number of stores you create directly affects:\nStorage efficiency Encryption boundaries Operational complexity Understanding how similar your data is matters more than how many sources you have.\nWhen a single Kloset Store makes sense # Using one store is often the simplest option.\nThis works well when:\nBackup sizes are relatively small Data across sources is largely similar You want minimal operational overhead Example: Similar data across many servers # Imagine 10 servers, each with 100 GB of data. Most of that data is identical: operating systems, shared libraries, common applications.\nBy storing all backups in a single Kloset Store, Plakar can deduplicate the shared data. Instead of storing 1 TB, only the unique portions are kept.\nThis approach maximizes deduplication and keeps management simple.\nThese numbers are illustrative and do not account for compression.\nWhen multiple Kloset Stores are better # Multiple stores are often preferable when data sets have little or no overlap.\nExample: Independent data sets # Consider 10 S3 buckets, each containing 100 GB of unrelated data.\nBecause there is no meaningful overlap, a single Kloset Store would provide little deduplication benefit. 
In this case, separating data into multiple stores can simplify management without increasing storage usage.\nSeparating stores for security or policy reasons # Deduplication is not the only reason to create multiple stores.\nYou may also want separation when:\nDifferent data sets require different encryption keys Access policies differ Data has different retention or compliance requirements Example: Same data, different trust boundaries # You might store internal backups and external customer backups separately, even if the data structure is similar, so each store can use a different encryption key.\nSmall data sets and simplicity # For many small backups (configuration files, small databases, metadata), the deduplication benefit may be minimal regardless of layout.\nIn these cases, using a single Kloset Store is often still the right choice simply because it is easier to operate.\nSummary # When deciding how many Kloset Stores to create, consider:\nHow similar your data sets are Whether deduplication efficiency matters Whether data needs to be isolated for security or policy reasons How much operational complexity you are willing to manage In practice, many environments start with a single store and introduce additional stores only when a clear need appears.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/explanations/how-many-kloset-stores/","section":"Docs","summary":"Understand how deduplication, data similarity, and security requirements affect the number of Kloset Stores you should use.","title":"How many Kloset Stores should you create","type":"docs"},{"content":" How many Kloset Stores should you create # A common design question when setting up backups with Plakar is how many Kloset Stores to create.\nShould you use:\nA single store for everything Separate stores for servers, SaaS data, or cloud buckets One store per system? There is no universal answer. 
The right choice depends on how your data is structured and how you want to manage it.\nThe key idea: Kloset Stores are deduplication boundaries # You can view a Kloset Store as a deduplication unit. Data is deduplicated within a store, but never across stores. This means the number of stores you create directly affects:\nStorage efficiency Encryption boundaries Operational complexity Understanding how similar your data is matters more than how many sources you have.\nWhen a single Kloset Store makes sense # Using one store is often the simplest option.\nThis works well when:\nBackup sizes are relatively small Data across sources is largely similar You want minimal operational overhead Example: Similar data across many servers # Imagine 10 servers, each with 100 GB of data. Most of that data is identical: operating systems, shared libraries, common applications.\nBy storing all backups in a single Kloset Store, Plakar can deduplicate the shared data. Instead of storing 1 TB, only the unique portions are kept.\nThis approach maximizes deduplication and keeps management simple.\nThese numbers are illustrative and do not account for compression.\nWhen multiple Kloset Stores are better # Multiple stores are often preferable when data sets have little or no overlap.\nExample: Independent data sets # Consider 10 S3 buckets, each containing 100 GB of unrelated data.\nBecause there is no meaningful overlap, a single Kloset Store would provide little deduplication benefit. 
In this case, separating data into multiple stores can simplify management without increasing storage usage.\nSeparating stores for security or policy reasons # Deduplication is not the only reason to create multiple stores.\nYou may also want separation when:\nDifferent data sets require different encryption keys Access policies differ Data has different retention or compliance requirements Example: Same data, different trust boundaries # You might store internal backups and external customer backups separately, even if the data structure is similar, so each store can use a different encryption key.\nSmall data sets and simplicity # For many small backups (configuration files, small databases, metadata), the deduplication benefit may be minimal regardless of layout.\nIn these cases, using a single Kloset Store is often still the right choice simply because it is easier to operate.\nSummary # When deciding how many Kloset Stores to create, consider:\nHow similar your data sets are Whether deduplication efficiency matters Whether data needs to be isolated for security or policy reasons How much operational complexity you are willing to manage In practice, many environments start with a single store and introduce additional stores only when a clear need appears.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/explanations/how-many-kloset-stores/","section":"Docs","summary":"Understand how deduplication, data similarity, and security requirements affect the number of Kloset Stores you should use.","title":"How many Kloset Stores should you create","type":"docs"},{"content":" How many Kloset Stores should you create # A common design question when setting up backups with Plakar is how many Kloset Stores to create.\nShould you use:\nA single store for everything Separate stores for servers, SaaS data, or cloud buckets One store per system? There is no universal answer. 
The right choice depends on how your data is structured and how you want to manage it.\nThe key idea: Kloset Stores are deduplication boundaries # You can view a Kloset Store as a deduplication unit. Data is deduplicated within a store, but never across stores. This means the number of stores you create directly affects:\nStorage efficiency Encryption boundaries Operational complexity Understanding how similar your data is matters more than how many sources you have.\nWhen a single Kloset Store makes sense # Using one store is often the simplest option.\nThis works well when:\nBackup sizes are relatively small Data across sources is largely similar You want minimal operational overhead Example: Similar data across many servers # Imagine 10 servers, each with 100 GB of data. Most of that data is identical: operating systems, shared libraries, common applications.\nBy storing all backups in a single Kloset Store, Plakar can deduplicate the shared data. Instead of storing 1 TB, only the unique portions are kept.\nThis approach maximizes deduplication and keeps management simple.\nThese numbers are illustrative and do not account for compression.\nWhen multiple Kloset Stores are better # Multiple stores are often preferable when data sets have little or no overlap.\nExample: Independent data sets # Consider 10 S3 buckets, each containing 100 GB of unrelated data.\nBecause there is no meaningful overlap, a single Kloset Store would provide little deduplication benefit. 
In this case, separating data into multiple stores can simplify management without increasing storage usage.\nSeparating stores for security or policy reasons # Deduplication is not the only reason to create multiple stores.\nYou may also want separation when:\nDifferent data sets require different encryption keys Access policies differ Data has different retention or compliance requirements Example: Same data, different trust boundaries # You might store internal backups and external customer backups separately, even if the data structure is similar, so each store can use a different encryption key.\nSmall data sets and simplicity # For many small backups (configuration files, small databases, metadata), the deduplication benefit may be minimal regardless of layout.\nIn these cases, using a single Kloset Store is often still the right choice simply because it is easier to operate.\nSummary # When deciding how many Kloset Stores to create, consider:\nHow similar your data sets are Whether deduplication efficiency matters Whether data needs to be isolated for security or policy reasons How much operational complexity you are willing to manage In practice, many environments start with a single store and introduce additional stores only when a clear need appears.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/explanations/how-many-kloset-stores/","section":"Docs","summary":"Understand how deduplication, data similarity, and security requirements affect the number of Kloset Stores you should use.","title":"How many Kloset Stores should you create","type":"docs"},{"content":" Integrations # Plakar can be extended through integrations that enable storing backups or backing up and restoring data from external services. 
Each integration may act as a store, a source, a destination, or any combination of these roles, depending on its capabilities.\nBelow is a list of links to the documentation for each supported integration.\nS3 Back up and restore S3 buckets with Plakar.\nSFTP / SSH Back up and restore remote directories over SFTP/SSH, and host Kloset stores on remote SFTP servers.\nNotion Back up and restore your Notion workspace with Plakar.\nDropbox Back up and restore your Dropbox with Plakar, and host Kloset stores in Dropbox.\niCloud Drive Back up and restore your iCloud Drive with Plakar, and host Kloset stores in iCloud Drive.\nKoofr Back up and restore your Koofr with Plakar, and host Kloset stores in Koofr.\nGoogle Drive Back up and restore your Google Drive with Plakar, and host Kloset stores in Google Drive.\nOneDrive Back up and restore your OneDrive with Plakar, and host Kloset stores in OneDrive.\nOpenDrive Back up and restore OpenDrive data with Plakar, and host Kloset stores in OpenDrive.\nProton Drive Back up and restore your Proton Drive with Plakar, and host Kloset stores in Proton Drive.\nProxmox Back up and restore Proxmox virtual machines and containers with Plakar.\nKubernetes Back up and restore Kubernetes resources and persistent volumes with Plakar.\netcd Back up etcd clusters with Plakar.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/integrations/","section":"Docs","summary":"","title":"Integrations","type":"docs"},{"content":" Integrations # Plakar can be extended through integrations that enable storing backups or backing up and restoring data from external services. 
Each integration may act as a store, a source, a destination, or any combination of these roles, depending on its capabilities.\nBelow is a list of links to the documentation for each supported integration.\nS3 Back up and restore S3 buckets with Plakar.\nSFTP / SSH Back up and restore remote directories over SFTP/SSH, and host Kloset stores on remote SFTP servers.\nNotion Back up and restore your Notion workspace with Plakar.\nDropbox Back up and restore your Dropbox with Plakar, and host Kloset stores in Dropbox.\niCloud Drive Back up and restore your iCloud Drive with Plakar, and host Kloset stores in iCloud Drive.\nKoofr Back up and restore your Koofr with Plakar, and host Kloset stores in Koofr.\nGoogle Drive Back up and restore your Google Drive with Plakar, and host Kloset stores in Google Drive.\nOneDrive Back up and restore your OneDrive with Plakar, and host Kloset stores in OneDrive.\nOpenDrive Back up and restore OpenDrive data with Plakar, and host Kloset stores in OpenDrive.\nProton Drive Back up and restore your Proton Drive with Plakar, and host Kloset stores in Proton Drive.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/integrations/","section":"Docs","summary":"","title":"Integrations","type":"docs"},{"content":" Integrations # Plakar can be extended through integrations that enable storing backups or backing up and restoring data from external services. 
Each integration may act as a store, a source, a destination, or any combination of these roles, depending on its capabilities.\nBelow is a list of links to the documentation for each supported integration.\nS3 Back up and restore S3 buckets with Plakar.\nSFTP / SSH Back up and restore remote directories over SFTP/SSH, and host Kloset stores on remote SFTP servers.\nNotion Back up and restore your Notion workspace with Plakar.\nDropbox Back up and restore your Dropbox with Plakar, and host Kloset stores in Dropbox.\niCloud Drive Back up and restore your iCloud Drive with Plakar, and host Kloset stores in iCloud Drive.\nKoofr Back up and restore your Koofr with Plakar, and host Kloset stores in Koofr.\nGoogle Drive Back up and restore your Google Drive with Plakar, and host Kloset stores in Google Drive.\nOneDrive Back up and restore your OneDrive with Plakar, and host Kloset stores in OneDrive.\nOpenDrive Back up and restore OpenDrive data with Plakar, and host Kloset stores in OpenDrive.\nProton Drive Back up and restore your Proton Drive with Plakar, and host Kloset stores in Proton Drive.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/integrations/","section":"Docs","summary":"","title":"Integrations","type":"docs"},{"content":" Integrations # Plakar can be extended through integrations that enable storing backups or backing up and restoring data from external services. 
Each integration may act as a store, a source, a destination, or any combination of these roles, depending on its capabilities.\nBelow is a list of links to the documentation for each supported integration.\nS3 Back up and restore S3 buckets with Plakar.\nSFTP / SSH Back up and restore remote directories over SFTP/SSH, and host Kloset stores on remote SFTP servers.\nNotion Back up and restore your Notion workspace with Plakar.\nDropbox Back up and restore your Dropbox with Plakar, and host Kloset stores in Dropbox.\niCloud Drive Back up and restore your iCloud Drive with Plakar, and host Kloset stores in iCloud Drive.\nKoofr Back up and restore your Koofr with Plakar, and host Kloset stores in Koofr.\nGoogle Drive Back up and restore your Google Drive with Plakar, and host Kloset stores in Google Drive.\nOneDrive Back up and restore your OneDrive with Plakar, and host Kloset stores in OneDrive.\nOpenDrive Back up and restore OpenDrive data with Plakar, and host Kloset stores in OpenDrive.\nProton Drive Back up and restore your Proton Drive with Plakar, and host Kloset stores in Proton Drive.\nProxmox Back up and restore Proxmox virtual machines and containers with Plakar.\nKubernetes Back up and restore Kubernetes resources and persistent volumes with Plakar.\netcd Back up etcd clusters with Plakar.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/integrations/","section":"Docs","summary":"","title":"Integrations","type":"docs"},{"content":" Creating a Kloset Store # A Kloset store is Plakar\u0026rsquo;s immutable storage backend for backup data. This guide covers filesystem-based store creation. You can learn more in the Kloset deep dive article\nWhy you need a Kloset store # Before you can run any backup, you\u0026rsquo;ll need to create a Kloset store to store the data. 
It can be hosted anywhere Plakar has a storage connector integration, e.g. a local filesystem path, a remote S3 bucket, another server via SFTP, or other supported backends.\nCreate Store with Path # $ plakar at /var/backups create Plakar prompts for an encryption passphrase. To avoid the prompt, set:\n$ export PLAKAR_PASSPHRASE=\u0026#34;my-secret-passphrase\u0026#34; $ plakar at /var/backups create Create Store with Alias # Configure the store once, then reference it by alias in all commands:\n$ plakar store add mybackups /var/backups passphrase=xxx Use the configured store:\n$ plakar at @mybackups create $ plakar at @mybackups ls Update store configuration # $ plakar store set mybackups passphrase=yyy Passphrase Changes Updating the passphrase only affects the configuration. Existing data created with the old passphrase still requires the original passphrase to access.\nDefault Store Location # Without specifying a path, plakar create uses ~/.plakar:\n$ plakar create When to Use Aliases # Use aliases for:\nStores requiring credentials (S3, cloud storage) Multiple stores with different configurations Avoiding repetitive path specifications ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/main/guides/create-kloset-repository/","section":"Docs","summary":"Create a Kloset Store on the filesystem using Plakar.","title":"Creating a Kloset Store","type":"docs"},{"content":" Creating a Kloset Store # A Kloset store is Plakar\u0026rsquo;s immutable storage backend for backup data. This guide covers filesystem-based store creation. You can learn more in the Kloset deep dive article.\nWhy you need a Kloset store # Before you can run any backup, you\u0026rsquo;ll need to create a Kloset store to store the data. 
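As a quick preview of the alias workflow, here is a minimal sketch; the alias name, path, backup source, and passphrase are placeholder values, and plakar is assumed to be on PATH.

```shell
# Register the store once under an alias, then use the alias everywhere.
# All values here are illustrative placeholders.
if ! command -v plakar >/dev/null 2>&1; then
  # stub for illustration when plakar is not installed
  plakar() { echo "plakar $*"; }
fi

plakar store add mybackups /var/backups passphrase=my-secret-passphrase
plakar at @mybackups create       # initialize the Kloset Store
plakar at @mybackups backup /etc  # take a first snapshot
plakar at @mybackups ls           # list snapshots in the store
```

The alias keeps credentials out of each command line, which is especially useful for stores that require credentials such as S3.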
It can be hosted anywhere Plakar has a storage connector integration, e.g. a local filesystem path, a remote S3 bucket, another server via SFTP, or other supported backends.\nCreate Store with Path # $ plakar at /var/backups create Plakar prompts for an encryption passphrase. To avoid the prompt, set:\n$ export PLAKAR_PASSPHRASE=\u0026#34;my-secret-passphrase\u0026#34; $ plakar at /var/backups create Create Store with Alias # Configure the store once, then reference it by alias in all commands:\n$ plakar store add mybackups /var/backups passphrase=xxx Use the configured store:\n$ plakar at @mybackups create $ plakar at @mybackups ls Update store configuration # $ plakar store set mybackups passphrase=yyy Passphrase Changes Updating the passphrase only affects the configuration. Existing data created with the old passphrase still requires the original passphrase to access.\nDefault Store Location # Without specifying a path, plakar create uses ~/.plakar:\n$ plakar create When to Use Aliases # Use aliases for:\nStores requiring credentials (S3, cloud storage) Multiple stores with different configurations Avoiding repetitive path specifications ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/guides/create-kloset-repository/","section":"Docs","summary":"Create a Kloset Store on the filesystem using Plakar.","title":"Creating a Kloset Store","type":"docs"},{"content":" Creating a Kloset Store # A Kloset store is Plakar\u0026rsquo;s immutable storage backend for backup data. This guide covers filesystem-based store creation. You can learn more in the Kloset deep dive article.\nWhy you need a Kloset store # Before you can run any backup, you\u0026rsquo;ll need to create a Kloset store to store the data. 
It can be hosted anywhere Plakar has a storage connector integration, e.g. a local filesystem path, a remote S3 bucket, another server via SFTP, or other supported backends.\nCreate Store with Path # $ plakar at /var/backups create Plakar prompts for an encryption passphrase. To avoid the prompt, set:\n$ export PLAKAR_PASSPHRASE=\u0026#34;my-secret-passphrase\u0026#34; $ plakar at /var/backups create Create Store with Alias # Configure the store once, then reference it by alias in all commands:\n$ plakar store add mybackups /var/backups passphrase=xxx Use the configured store:\n$ plakar at @mybackups create $ plakar at @mybackups ls Update store configuration # $ plakar store set mybackups passphrase=yyy Passphrase Changes Updating the passphrase only affects the configuration. Existing data created with the old passphrase still requires the original passphrase to access.\nDefault Store Location # Without specifying a path, plakar create uses ~/.plakar:\n$ plakar create When to Use Aliases # Use aliases for:\nStores requiring credentials (S3, cloud storage) Multiple stores with different configurations Avoiding repetitive path specifications ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/create-kloset-repository/","section":"Docs","summary":"Create a Kloset Store on the filesystem using Plakar.","title":"Creating a Kloset Store","type":"docs"},{"content":" Creating a Kloset Store # A Kloset store is Plakar\u0026rsquo;s immutable storage backend for backup data. This guide covers filesystem-based store creation. You can learn more in the Kloset deep dive article.\nWhy you need a Kloset store # Before you can run any backup, you\u0026rsquo;ll need to create a Kloset store to store the data. 
It can be hosted anywhere Plakar has a storage connector integration, e.g. a local filesystem path, a remote S3 bucket, another server via SFTP, or other supported backends.\nCreate Store with Path # $ plakar at /var/backups create Plakar prompts for an encryption passphrase. To avoid the prompt, set:\n$ export PLAKAR_PASSPHRASE=\u0026#34;my-secret-passphrase\u0026#34; $ plakar at /var/backups create Create Store with Alias # Configure the store once, then reference it by alias in all commands:\n$ plakar store add mybackups /var/backups passphrase=xxx Use the configured store:\n$ plakar at @mybackups create $ plakar at @mybackups ls Update store configuration # $ plakar store set mybackups passphrase=yyy Passphrase Changes Updating the passphrase only affects the configuration. Existing data created with the old passphrase still requires the original passphrase to access.\nDefault Store Location # Without specifying a path, plakar create uses ~/.plakar:\n$ plakar create When to Use Aliases # Use aliases for:\nStores requiring credentials (S3, cloud storage) Multiple stores with different configurations Avoiding repetitive path specifications ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/create-kloset-repository/","section":"Docs","summary":"Create a Kloset Store on the filesystem using Plakar.","title":"Creating a Kloset Store","type":"docs"},{"content":" Plakar: v1.0.6 # Getting Started Overview Installation Quickstart Synchronize multiple copies Backup non-filesystem data Guides Scheduling Tasks Importing Configurations Creating a Kloset Store Serving a Kloset Store over HTTP Excluding files from a backup Retrieving secrets via external command Logging In to Plakar Managing packages Pruning snapshots MySQL PostgreSQL OVHcloud Exoscale Integrations S3 SFTP / SSH Notion Dropbox iCloud Drive Koofr Google Drive OneDrive OpenDrive Proton Drive Explanations How Plakar Works Should you push or pull backups How many Kloset Stores should you 
create Why multiple backup copies matter Why you need to backup your SaaS How Maintenance Works References Plakar Ptar Command line syntax Commands Community ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/","section":"Docs","summary":"Plakar documentation hub, find guides, references, and resources for working with Plakar.","title":"Plakar: v1.0.6","type":"docs"},{"content":" Quickstart # This guide gets you started in minutes. You\u0026rsquo;ll create your first backup snapshot, verify it\u0026rsquo;s secure, and learn how to restore your files.\nPlakar makes backups simple and secure by default. Every backup is end-to-end encrypted, deduplicated to save space, and stored as an independent snapshot you can restore at any time without depending on previous backups.\nIf you\u0026rsquo;ve used traditional backup tools, here\u0026rsquo;s what\u0026rsquo;s different: instead of incremental archives that chain together, Plakar creates self-contained snapshots. You can delete old snapshots without breaking newer ones, compare any two snapshots directly, and trust that your data is tamper-evident and encrypted before it ever leaves your machine.\nRequirements # Make sure Plakar is installed on your system. If you haven\u0026rsquo;t done this yet, please refer to the installation guide for detailed instructions.\nCreate a Kloset Store # Before we can back up any data, we need to define where the backup will go. In plakar terms, this storage location is called a Kloset Store. This is where Plakar keeps your backups. Think of it like a safe folder for snapshots. You can find out more about the concept and rationale behind Kloset in this post on our blog.\nFor our first backup, we will create a local Kloset Store on the filesystem of the host OS. 
In a real backup scenario you would want to store backups on a different physical device, so substitute in a better location if you have one.\nIn your terminal, run the following command:\n$ plakar at $HOME/backups create Don\u0026rsquo;t Lose or Forget your Passphrase Be extra careful when choosing the passphrase. People with access to the Kloset Store and knowledge of the passphrase can read your backups.\nBy default Plakar will enforce rules on your choice of passphrase to make sure it is complex enough to be secure. To add complexity, use a mixture of upper and lower case characters, numbers and symbols.\nYour passphrase is not stored anywhere and cannot be recovered in case of loss. A lost passphrase means the data within the repository can no longer be accessed or recovered.\nCreate your first backup # Now that we have created the Kloset Store where data will be stored, we can use it to create our first backup. Plakar uses the at keyword to specify the Kloset Store to use.\nTo create a simple example backup, try running:\n$ plakar at $HOME/backups backup $HOME/Documents This backs up your Documents folder into the $HOME/backups store. Replace the paths with whichever source folder and store location you prefer.\nPlakar will process the files it finds at that location (in this case the Documents folder) and pass them to the Kloset where they will be chunked and encrypted.\nThe output will indicate the progress:\ndd62691d: OK ✓ /home/user/Documents/Obsidian/NOTES.md dd62691d: OK ✓ /home/user/Documents/budget.xlsx dd62691d: OK ✓ /home/user/Documents/notes.txt [...] dd62691d: OK ✓ /home/user/Documents dd62691d: OK ✓ /home/user dd62691d: OK ✓ /home dd62691d: OK ✓ / info: backup: created unsigned snapshot dd62691d of size 6.4 KiB in 125.317267ms (wrote 577 KiB) The output lists the short form of the snapshot ID. 
This is used to identify a particular snapshot and is also how you identify the snapshot to use for various Plakar commands.\nThe help command Learning new tools can be confusing. To make things easier, Plakar includes built-in help for all commands. Just use plakar help and then the command you need help with for a full list of options and examples. For example, if you forget what the options are for restoring files from a snapshot: plakar help restore\nList snapshots # You can verify that the backup exists with the ls command, which returns the backups in that Kloset Store:\n$ plakar at $HOME/backups ls 2026-01-14T06:45:32Z dd62691d 6.4 KiB 0s /home/user/Documents The output lists the date of the last backup, the short UUID, the size of files backed-up, the time it took to create the backup and the source path of the backup.\nVerify integrity # It\u0026rsquo;s always a good idea to verify the integrity of your backups. You can do this with the check command. This will read back the data from the Kloset Store, decrypt it and verify its integrity by recomputing checksums.\n$ plakar at $HOME/backups check dd62691d info: dd62691d: ✓ /home/user/Documents info: dd62691d: ✓ /home/user/Documents/Obsidian info: dd62691d: ✓ /home/user/Documents/code_samples [...] info: dd62691d: ✓ /home/user/Documents/Obsidian/NOTES.md info: dd62691d: ✓ /home/user/Documents/recipes/ingredients.csv info: dd62691d: ✓ /home/user/Documents/resume.pdf info: dd62691d: ✓ /home/user/Documents/project_proposal.docx info: check: verification of dd62691d:/home/user/Documents completed successfully In production, you would typically run this command periodically to ensure the integrity of your backups over time. This is necessary to ensure that data has not degraded or become corrupted while stored.\nRestore files from a backup # You can restore files from a backup using the restore command. 
In this case, we are restoring the snapshot we just created to another directory called restored.\n$ plakar at $HOME/backups restore -to $HOME/restored dd62691d info: dd62691d: OK ✓ /home/user/Documents info: dd62691d: OK ✓ /home/user/Documents/Obsidian info: dd62691d: OK ✓ /home/user/Documents/budget.xlsx [...] info: dd62691d: OK ✓ /home/user/Documents/recipes/desserts.txt info: dd62691d: OK ✓ /home/user/Documents/recipes/dinner.txt info: dd62691d: OK ✓ /home/user/Documents/resume.pdf info: dd62691d: OK ✓ /home/user/Documents/recipes/ingredients.csv info: restore: restoration of dd62691d:/home/user/Documents at /home/user/restored completed successfully To verify the files have been re-created, list the directory they were restored to. Note that the properties of the restored files, such as timestamps and permissions, will match the original files:\n$ ls -l $HOME/restored/Documents/ total 36 -rw-r--r-- 1 user user 30 Jan 14 06:31 budget.xlsx drwxr-xr-x 2 user user 4096 Jan 14 06:31 code_samples -rw-r--r-- 1 user user 28 Jan 14 06:31 notes.txt [...] -rw-r--r-- 1 user user 36 Jan 14 06:31 presentation.pptx -rw-r--r-- 1 user user 40 Jan 14 06:31 project_proposal.docx drwxr-xr-x 2 user user 4096 Jan 14 06:31 recipes -rw-r--r-- 1 user user 29 Jan 14 06:31 resume.pdf Access the UI # Plakar provides a web interface to view the backups and their content. To start the web interface, run:\n$ plakar at $HOME/backups ui Your default browser will open a new tab. You can navigate through the snapshots, search and view the files, and download them.\nA public instance of the web UI is also available at https://demo.plakar.io. You can use it to explore the features of the UI on real backups without installing anything.\nCongratulations! # You have successfully:\ncreated a backup verified it restored files used the graphical UI How long did it take? 
This is how easy Plakar is for simple, secure backups.\nNext steps # Having a backup on the filesystem is a start, but to improve the durability of your backups, you should consider hosting multiple copies in different locations.\nContinue to Part 2 of the Quickstart to create multiple copies of your backups.\n","date":"11 March 2026","externalUrl":null,"permalink":"/docs/main/quickstart/first-backup/","section":"Docs","summary":"Get started with plakar: create your first backup, verify integrity, restore, and use the UI.","title":"Quickstart","type":"docs"},{"content":" Quickstart # This guide gets you started in minutes. You\u0026rsquo;ll create your first backup snapshot, verify it\u0026rsquo;s secure, and learn how to restore your files.\nPlakar makes backups simple and secure by default. Every backup is end-to-end encrypted, deduplicated to save space, and stored as an independent snapshot you can restore at any time without depending on previous backups.\nIf you\u0026rsquo;ve used traditional backup tools, here\u0026rsquo;s what\u0026rsquo;s different: instead of incremental archives that chain together, Plakar creates self-contained snapshots. You can delete old snapshots without breaking newer ones, compare any two snapshots directly, and trust that your data is tamper-evident and encrypted before it ever leaves your machine.\nRequirements # Make sure Plakar is installed on your system. If you haven\u0026rsquo;t done this yet, please refer to the installation guide for detailed instructions.\nCreate a Kloset Store # Before we can back up any data, we need to define where the backup will go. In plakar terms, this storage location is called a Kloset Store. This is where Plakar keeps your backups. Think of it like a safe folder for snapshots. You can find out more about the concept and rationale behind Kloset in this post on our blog.\nFor our first backup, we will create a local Kloset Store on the filesystem of the host OS. 
In a real backup scenario, you would want to store backups on a different physical device, so substitute in a better location if you have one.\nIn your terminal, run the following command:\n$ plakar at $HOME/backups create Don\u0026rsquo;t Lose or Forget your Passphrase Be extra careful when choosing the passphrase. People with access to the Kloset Store and knowledge of the passphrase can read your backups.\nBy default, Plakar will enforce rules on your choice of passphrase to make sure it is complex enough to be secure. To add complexity, use a mixture of upper and lower case characters, numbers and symbols.\nYour passphrase is not stored anywhere and cannot be recovered in case of loss. A lost passphrase means the data within the repository can no longer be accessed or recovered.\nCreate your first backup # Now that we have created the Kloset Store where data will be stored, we can use it to create our first backup. Plakar uses the at keyword to specify the Kloset Store to use.\nTo create a simple example backup, try running:\n$ plakar at $HOME/backups backup $HOME/Documents This backs up your Documents folder into the $HOME/backups Kloset Store. Replace the paths with the folder you want to back up and the Kloset Store you want to use.\nPlakar will process the files it finds at that location (in this case the Documents folder) and pass them to the Kloset, where they will be chunked and encrypted.\nThe output will indicate the progress:\ndd62691d: OK ✓ /home/user/Documents/Obsidian/NOTES.md dd62691d: OK ✓ /home/user/Documents/budget.xlsx dd62691d: OK ✓ /home/user/Documents/notes.txt [...] dd62691d: OK ✓ /home/user/Documents dd62691d: OK ✓ /home/user dd62691d: OK ✓ /home dd62691d: OK ✓ / info: backup: created unsigned snapshot dd62691d of size 6.4 KiB in 125.317267ms (wrote 577 KiB) The output lists the short form of the snapshot ID. 
This short ID identifies the snapshot, and it is what you pass to Plakar commands that operate on a specific snapshot.\nThe help command Learning new tools can be confusing. To make things easier, Plakar includes built-in help for all commands. Just use plakar help followed by the command you need help with for a full list of options and examples. For example, if you forget what the options are for restoring files from a snapshot: plakar help restore\nList snapshots # You can verify that the backup exists with the ls command, which returns the backups in that Kloset Store:\n$ plakar at $HOME/backups ls 2026-01-14T06:45:32Z dd62691d 6.4 KiB 0s /home/user/Documents The output lists the date of the backup, the short snapshot ID, the size of the files backed up, the time it took to create the backup, and the source path of the backup.\nVerify integrity # It\u0026rsquo;s always a good idea to verify the integrity of your backups. You can do this with the check command. This will read back the data from the Kloset Store, decrypt it and verify its integrity by recomputing checksums.\n$ plakar at $HOME/backups check dd62691d info: dd62691d: ✓ /home/user/Documents info: dd62691d: ✓ /home/user/Documents/Obsidian info: dd62691d: ✓ /home/user/Documents/code_samples [...] info: dd62691d: ✓ /home/user/Documents/Obsidian/NOTES.md info: dd62691d: ✓ /home/user/Documents/recipes/ingredients.csv info: dd62691d: ✓ /home/user/Documents/resume.pdf info: dd62691d: ✓ /home/user/Documents/project_proposal.docx info: check: verification of dd62691d:/home/user/Documents completed successfully In production, you would typically run this command periodically to confirm that your backed-up data has not degraded or become corrupted while stored.\nRestore files from a backup # You can restore files from a backup using the restore command. 
In this case, we are restoring the snapshot we just created to another directory called restored.\n$ plakar at $HOME/backups restore -to $HOME/restored dd62691d info: dd62691d: OK ✓ /home/user/Documents info: dd62691d: OK ✓ /home/user/Documents/Obsidian info: dd62691d: OK ✓ /home/user/Documents/budget.xlsx [...] info: dd62691d: OK ✓ /home/user/Documents/recipes/desserts.txt info: dd62691d: OK ✓ /home/user/Documents/recipes/dinner.txt info: dd62691d: OK ✓ /home/user/Documents/resume.pdf info: dd62691d: OK ✓ /home/user/Documents/recipes/ingredients.csv info: restore: restoration of dd62691d:/home/user/Documents at /home/user/restored completed successfully To verify the files have been re-created, list the directory they were restored to. Note that the properties of the restored files, such as timestamps and permissions, will match the original files:\n$ ls -l $HOME/restored/Documents/ total 36 -rw-r--r-- 1 user user 30 Jan 14 06:31 budget.xlsx drwxr-xr-x 2 user user 4096 Jan 14 06:31 code_samples -rw-r--r-- 1 user user 28 Jan 14 06:31 notes.txt [...] -rw-r--r-- 1 user user 36 Jan 14 06:31 presentation.pptx -rw-r--r-- 1 user user 40 Jan 14 06:31 project_proposal.docx drwxr-xr-x 2 user user 4096 Jan 14 06:31 recipes -rw-r--r-- 1 user user 29 Jan 14 06:31 resume.pdf Access the UI # Plakar provides a web interface to view the backups and their content. To start the web interface, run:\n$ plakar at $HOME/backups ui Your default browser will open a new tab. You can navigate through the snapshots, search and view the files, and download them.\nA public instance of the web UI is also available at https://demo.plakar.io. You can use it to explore the features of the UI on real backups without installing anything.\nCongratulations! # You have successfully:\ncreated a backup verified it restored files used the graphical UI How long did it take? 
This is how easy Plakar is for simple, secure backups.\nNext steps # Having a backup on the filesystem is a start, but to improve the durability of your backups, you should consider hosting multiple copies in different locations.\nContinue to Part 2 of the Quickstart to create multiple copies of your backups.\n","date":"11 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/quickstart/first-backup/","section":"Docs","summary":"Get started with plakar: create your first backup, verify integrity, restore, and use the UI.","title":"Quickstart","type":"docs"},{"content":" Quickstart # This guide gets you started in minutes. You\u0026rsquo;ll create your first backup snapshot, verify it\u0026rsquo;s secure, and learn how to restore your files.\nPlakar makes backups simple and secure by default. Every backup is end-to-end encrypted, deduplicated to save space, and stored as an independent snapshot you can restore at any time without depending on previous backups.\nIf you\u0026rsquo;ve used traditional backup tools, here\u0026rsquo;s what\u0026rsquo;s different: instead of incremental archives that chain together, Plakar creates self-contained snapshots. You can delete old snapshots without breaking newer ones, compare any two snapshots directly, and trust that your data is tamper-evident and encrypted before it ever leaves your machine.\nRequirements # Make sure Plakar is installed on your system. If you haven\u0026rsquo;t done this yet, please refer to the installation guide for detailed instructions.\nCreate a Kloset Store # Before we can back up any data, we need to define where the backup will go. In Plakar terms, this storage location is called a Kloset Store. This is where Plakar keeps your backups. Think of it as a safe folder for snapshots. You can find out more about the concept and rationale behind Kloset in this post on our blog.\nFor our first backup, we will create a local Kloset Store on the filesystem of the host OS. 
In a real backup scenario, you would want to store backups on a different physical device, so substitute in a better location if you have one.\nIn your terminal, run the following command:\n$ plakar at $HOME/backups create Don\u0026rsquo;t Lose or Forget your Passphrase Be extra careful when choosing the passphrase. People with access to the Kloset Store and knowledge of the passphrase can read your backups.\nBy default, Plakar will enforce rules on your choice of passphrase to make sure it is complex enough to be secure. To add complexity, use a mixture of upper and lower case characters, numbers and symbols.\nYour passphrase is not stored anywhere and cannot be recovered in case of loss. A lost passphrase means the data within the repository can no longer be accessed or recovered.\nCreate your first backup # Now that we have created the Kloset Store where data will be stored, we can use it to create our first backup. Plakar uses the at keyword to specify the Kloset Store to use.\nTo create a simple example backup, try running:\n$ plakar at $HOME/backups backup $HOME/Documents This backs up your Documents folder into the $HOME/backups Kloset Store. Replace the paths with the folder you want to back up and the Kloset Store you want to use.\nPlakar will process the files it finds at that location (in this case the Documents folder) and pass them to the Kloset, where they will be chunked and encrypted.\nThe output will indicate the progress:\ndd62691d: OK ✓ /home/user/Documents/Obsidian/NOTES.md dd62691d: OK ✓ /home/user/Documents/budget.xlsx dd62691d: OK ✓ /home/user/Documents/notes.txt [...] dd62691d: OK ✓ /home/user/Documents dd62691d: OK ✓ /home/user dd62691d: OK ✓ /home dd62691d: OK ✓ / info: backup: created unsigned snapshot dd62691d of size 6.4 KiB in 125.317267ms (wrote 577 KiB) The output lists the short form of the snapshot ID. 
This short ID identifies the snapshot, and it is what you pass to Plakar commands that operate on a specific snapshot.\nThe help command Learning new tools can be confusing. To make things easier, Plakar includes built-in help for all commands. Just use plakar help followed by the command you need help with for a full list of options and examples. For example, if you forget what the options are for restoring files from a snapshot: plakar help restore\nList snapshots # You can verify that the backup exists with the ls command, which returns the backups in that Kloset Store:\n$ plakar at $HOME/backups ls 2026-01-14T06:45:32Z dd62691d 6.4 KiB 0s /home/user/Documents The output lists the date of the backup, the short snapshot ID, the size of the files backed up, the time it took to create the backup, and the source path of the backup.\nVerify integrity # It\u0026rsquo;s always a good idea to verify the integrity of your backups. You can do this with the check command. This will read back the data from the Kloset Store, decrypt it and verify its integrity by recomputing checksums.\n$ plakar at $HOME/backups check dd62691d info: dd62691d: ✓ /home/user/Documents info: dd62691d: ✓ /home/user/Documents/Obsidian info: dd62691d: ✓ /home/user/Documents/code_samples [...] info: dd62691d: ✓ /home/user/Documents/Obsidian/NOTES.md info: dd62691d: ✓ /home/user/Documents/recipes/ingredients.csv info: dd62691d: ✓ /home/user/Documents/resume.pdf info: dd62691d: ✓ /home/user/Documents/project_proposal.docx info: check: verification of dd62691d:/home/user/Documents completed successfully In production, you would typically run this command periodically to confirm that your backed-up data has not degraded or become corrupted while stored.\nRestore files from a backup # You can restore files from a backup using the restore command. 
In this case, we are restoring the snapshot we just created to another directory called restored.\n$ plakar at $HOME/backups restore -to $HOME/restored dd62691d info: dd62691d: OK ✓ /home/user/Documents info: dd62691d: OK ✓ /home/user/Documents/Obsidian info: dd62691d: OK ✓ /home/user/Documents/budget.xlsx [...] info: dd62691d: OK ✓ /home/user/Documents/recipes/desserts.txt info: dd62691d: OK ✓ /home/user/Documents/recipes/dinner.txt info: dd62691d: OK ✓ /home/user/Documents/resume.pdf info: dd62691d: OK ✓ /home/user/Documents/recipes/ingredients.csv info: restore: restoration of dd62691d:/home/user/Documents at /home/user/restored completed successfully To verify the files have been re-created, list the directory they were restored to. Note that the properties of the restored files, such as timestamps and permissions, will match the original files:\n$ ls -l $HOME/restored/Documents/ total 36 -rw-r--r-- 1 user user 30 Jan 14 06:31 budget.xlsx drwxr-xr-x 2 user user 4096 Jan 14 06:31 code_samples -rw-r--r-- 1 user user 28 Jan 14 06:31 notes.txt [...] -rw-r--r-- 1 user user 36 Jan 14 06:31 presentation.pptx -rw-r--r-- 1 user user 40 Jan 14 06:31 project_proposal.docx drwxr-xr-x 2 user user 4096 Jan 14 06:31 recipes -rw-r--r-- 1 user user 29 Jan 14 06:31 resume.pdf Access the UI # Plakar provides a web interface to view the backups and their content. To start the web interface, run:\n$ plakar at $HOME/backups ui Your default browser will open a new tab. You can navigate through the snapshots, search and view the files, and download them.\nA public instance of the web UI is also available at https://demo.plakar.io. You can use it to explore the features of the UI on real backups without installing anything.\nCongratulations! # You have successfully:\ncreated a backup verified it restored files used the graphical UI How long did it take? 
This is how easy Plakar is for simple, secure backups.\nNext steps # Having a backup on the filesystem is a start, but to improve the durability of your backups, you should consider hosting multiple copies in different locations.\nContinue to Part 2 of the Quickstart to create multiple copies of your backups.\n","date":"11 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/quickstart/first-backup/","section":"Docs","summary":"Get started with plakar: create your first backup, verify integrity, restore, and use the UI.","title":"Quickstart","type":"docs"},{"content":" Installation # Plakar Control Plane is delivered as a virtual appliance, a pre-built machine image you deploy on your own infrastructure. This means your data never leaves your environment, and you retain full control over your backup system.\nOnce the appliance is running, the first thing you do is enroll your instance. Enrollment registers the appliance with plakar.io to automatically retrieve your license and set up billing reporting. No backup data is ever transferred during this process, only the consumption metrics needed for billing.\nIf you operate in an air-gapped or PCI-DSS environment and need a full offline mode, contact us.\nNext steps # Select your provider to continue with the installation:\nInstallation on AWS Installation on OVHcloud Installation on Scaleway Once installed, see Enrollment to activate your instance.\n","date":"20 April 2026","externalUrl":null,"permalink":"/control-plane-docs/intro/installation/","section":"Control Plane Docs","summary":"How to deploy Plakar Control Plane as a virtual appliance on your infrastructure.","title":"Installation","type":"control-plane-docs"},{"content":" Dropbox # The Dropbox integration package for Plakar allows you to back up and restore data to and from Dropbox cloud storage, as well as host Kloset stores directly within Dropbox. 
It is built on top of Rclone, a command-line program to manage files on cloud storage, which supports Dropbox as one of its many backends.\nThe integration provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Dropbox remote must be configured. Typical use cases\nCold backup of Dropbox folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Dropbox, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/. Then, run the following command to configure Rclone with Dropbox:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Dropbox.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). 
Enter the number corresponding to \u0026ldquo;Dropbox\u0026rdquo; from the list of supported storage providers. Leave client_id and client_secret empty to use Rclone\u0026rsquo;s defaults, or provide your own if you have them. Choose to open the browser for authentication. Confirm the settings. To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Dropbox files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Dropbox.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Dropbox via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot via Rclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via Rclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data via Rclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nSee also # Rclone documentation for Dropbox ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/main/integrations/dropbox/","section":"Docs","summary":"Back up and restore your Dropbox with Plakar, and host Kloset stores in Dropbox.","title":"Dropbox","type":"docs"},{"content":" Dropbox # The Dropbox integration package for Plakar allows you to back up and restore data to and from Dropbox cloud storage, as well as host Kloset stores directly within Dropbox. 
It is built on top of Rclone, a command-line program to manage files on cloud storage, which supports Dropbox as one of its many backends.\nThe integration provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Dropbox remote must be configured. Typical use cases\nCold backup of Dropbox folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Dropbox, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/. Then, run the following command to configure Rclone with Dropbox:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Dropbox.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). 
Enter the number corresponding to \u0026ldquo;Dropbox\u0026rdquo; from the list of supported storage providers. Leave client_id and client_secret empty to use Rclone\u0026rsquo;s defaults, or provide your own if you have them. Choose to open the browser for authentication. Confirm the settings. To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Dropbox files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Dropbox.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Dropbox via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot via Rclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via Rclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data via Rclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nSee also # Rclone documentation for Dropbox ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/integrations/dropbox/","section":"Docs","summary":"Back up and restore your Dropbox with Plakar, and host Kloset stores in Dropbox.","title":"Dropbox","type":"docs"},{"content":" Dropbox # The Dropbox integration package for Plakar allows you to back up and restore data to and from Dropbox cloud storage, as well as host Kloset stores directly within Dropbox. 
It is built on top of Rclone, a command-line program to manage files on cloud storage, which supports Dropbox as one of its many backends.\nThe integration provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Dropbox remote must be configured. Typical use cases\nCold backup of Dropbox folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Dropbox, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/. Then, run the following command to configure Rclone with Dropbox:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Dropbox.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). 
Enter the number corresponding to \u0026ldquo;Dropbox\u0026rdquo; from the list of supported storage providers. Leave client_id and client_secret empty to use Rclone\u0026rsquo;s defaults, or provide your own if you have them. Choose to open the browser for authentication. Confirm the settings. To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Dropbox files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Dropbox.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Dropbox via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot via Rclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via Rclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nSee also # Rclone documentation for Dropbox ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/integrations/dropbox/","section":"Docs","summary":"Back up and restore your Dropbox with Plakar, and host Kloset stores in Dropbox.","title":"Dropbox","type":"docs"},{"content":" Dropbox # The Dropbox integration package for Plakar allows you to back up and restore data to and from Dropbox cloud storage, as well as host Kloset stores directly within Dropbox. 
It is built on top of Rclone, a command-line program to manage files on cloud storage, which supports Dropbox as one of its many backends.\nThe integration provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Dropbox remote must be configured. Typical use cases\nCold backup of Dropbox folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Dropbox, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/. Then, run the following command to configure Rclone with Dropbox:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Dropbox.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). 
Enter the number corresponding to \u0026ldquo;Dropbox\u0026rdquo; from the list of supported storage providers. Leave client_id and client_secret empty to use Rclone\u0026rsquo;s defaults, or provide your own if you have them. Choose to open the browser for authentication. Confirm the settings. To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Dropbox files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Dropbox.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Dropbox via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nSee also # Rclone documentation for Dropbox ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/integrations/dropbox/","section":"Docs","summary":"Back up and restore your Dropbox with Plakar, and host Kloset stores in Dropbox.","title":"Dropbox","type":"docs"},{"content":" Explanations # This section provides background and context to help you understand how Plakar works and why certain design choices exist. 
These pages focus on concepts, trade‑offs, and best practices rather than step‑by‑step instructions.\nIf you’re looking for practical instructions, see the Guides section.\nHow Plakar Works Understand the core architecture and data processing pipeline behind Plakar, including Kloset stores, chunking, deduplication, compression, encryption, and snapshot management\nShould you push or pull backups Understand the difference between push and pull backup models, and how Plakar supports both.\nHow many Kloset Stores should you create Understand how deduplication, data similarity, and security requirements affect the number of Kloset Stores you should use.\nWhy multiple backup copies matter Understand why multiple backup copies drastically reduce the risk of data loss, and how this leads to the 3‑2‑1 backup strategy.\nWhy you need to backup your SaaS Understand why cloud services do not replace backups, and why SaaS data requires independent protection.\nHow Maintenance Works Understand how Plakar stores backup data in chunks and packfiles, why deleting a snapshot does not immediately free space, and how the maintenance process safely reclaims unused storage.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/explanations/","section":"Docs","summary":"","title":"Explanations","type":"docs"},{"content":" Explanations # This section provides background and context to help you understand how Plakar works and why certain design choices exist. 
These pages focus on concepts, trade‑offs, and best practices rather than step‑by‑step instructions.\nIf you’re looking for practical instructions, see the Guides section.\nHow Plakar Works Understand the core architecture and data processing pipeline behind Plakar, including Kloset stores, chunking, deduplication, compression, encryption, and snapshot management\nShould you push or pull backups Understand the difference between push and pull backup models, and how Plakar supports both.\nHow many Kloset Stores should you create Understand how deduplication, data similarity, and security requirements affect the number of Kloset Stores you should use.\nWhy multiple backup copies matter Understand why multiple backup copies drastically reduce the risk of data loss, and how this leads to the 3‑2‑1 backup strategy.\nWhy you need to backup your SaaS Understand why cloud services do not replace backups, and why SaaS data requires independent protection.\nHow Maintenance Works Understand how Plakar stores backup data in chunks and packfiles, why deleting a snapshot does not immediately free space, and how the maintenance process safely reclaims unused storage.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/explanations/","section":"Docs","summary":"","title":"Explanations","type":"docs"},{"content":" Explanations # This section provides background and context to help you understand how Plakar works and why certain design choices exist. 
These pages focus on concepts, trade‑offs, and best practices rather than step‑by‑step instructions.\nIf you’re looking for practical instructions, see the Guides section.\nHow Plakar Works Understand the core architecture and data processing pipeline behind Plakar, including Kloset stores, chunking, deduplication, compression, encryption, and snapshot management\nShould you push or pull backups Understand the difference between push and pull backup models, and how Plakar supports both.\nHow many Kloset Stores should you create Understand how deduplication, data similarity, and security requirements affect the number of Kloset Stores you should use.\nWhy multiple backup copies matter Understand why multiple backup copies drastically reduce the risk of data loss, and how this leads to the 3‑2‑1 backup strategy.\nWhy you need to backup your SaaS Understand why cloud services do not replace backups, and why SaaS data requires independent protection.\nHow Maintenance Works Understand how Plakar stores backup data in chunks and packfiles, why deleting a snapshot does not immediately free space, and how the maintenance process safely reclaims unused storage.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/explanations/","section":"Docs","summary":"","title":"Explanations","type":"docs"},{"content":" Explanations # This section provides background and context to help you understand how Plakar works and why certain design choices exist. 
These pages focus on concepts, trade‑offs, and best practices rather than step‑by‑step instructions.\nIf you’re looking for practical instructions, see the Guides section.\nHow Plakar Works Understand the core architecture and data processing pipeline behind Plakar, including Kloset stores, chunking, deduplication, compression, encryption, and snapshot management\nShould you push or pull backups Understand the difference between push and pull backup models, and how Plakar supports both.\nHow many Kloset Stores should you create Understand how deduplication, data similarity, and security requirements affect the number of Kloset Stores you should use.\nWhy multiple backup copies matter Understand why multiple backup copies drastically reduce the risk of data loss, and how this leads to the 3‑2‑1 backup strategy.\nWhy you need to backup your SaaS Understand why cloud services do not replace backups, and why SaaS data requires independent protection.\nHow Maintenance Works Understand how Plakar stores backup data in chunks and packfiles, why deleting a snapshot does not immediately free space, and how the maintenance process safely reclaims unused storage.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/explanations/","section":"Docs","summary":"","title":"Explanations","type":"docs"},{"content":" Why multiple backup copies matter # Keeping multiple copies of your backups is one of the most important principles in data protection. 
The goal is simple: reduce the chance that a single failure can destroy all copies of your data at once.\nThis page explains why multiple copies matter and how many you should reasonably aim for.\nWhy one copy is not enough # If all your backups exist in a single location, any failure affecting that location can result in total data loss.\nThis includes:\nHardware failures (disk crashes, controller failures) Power or electrical issues Accidental deletion or misconfiguration Theft or physical damage Natural disasters (fire, flood, earthquake) Malicious or intentional actions Even if each of these events is unlikely on its own, they happen often enough that relying on a single backup copy is risky. A good backup strategy assumes that failures will happen eventually.\nWhy multiple copies change the odds # Each backup copy stored in a different place acts as an independent safety net.\nData loss only occurs if all copies are lost at the same time. As long as at least one copy survives, recovery is possible.\nThis is why adding copies reduces risk so dramatically:\nWith one copy, a single failure is enough to lose everything. With two copies, data is lost only if both copies fail at the same time. With three copies, all three must fail simultaneously. In practice, the probability of independent failures overlapping is extremely low, especially when copies are stored in different locations.\nAn intuitive way to think about it # Studies of large storage systems typically show that the probability of losing data at a single site over a year is in the low single‑digit percentages.\nThat means:\nLosing one copy is not rare Losing two independent copies at the same time is very unlikely Losing three independent copies is exceptionally unlikely Each additional copy reduces risk by orders of magnitude, not by small increments.\nWhy locations matter # Multiple copies only help if failures are independent. 
Storing three backups on the same machine, in the same room, or in the same data center does not protect you from events that affect all of them at once.\nTo be effective, copies should be:\nStored on different hardware Located in different physical places Ideally managed under different failure domains The 3‑2‑1 backup strategy # These ideas are commonly summarized by the 3‑2‑1 backup rule:\n3 copies of your data (the live data plus two backups) 2 different storage types 1 off‑site copy This strategy places the risk of total data loss into the “extremely unlikely” range while remaining practical to operate.\nPlakar’s ability to synchronize Kloset Stores across locations makes it easy to apply these principles without changing how backups are created.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/explanations/why-several-copies/","section":"Docs","summary":"Understand why multiple backup copies drastically reduce the risk of data loss, and how this leads to the 3‑2‑1 backup strategy.","title":"Why multiple backup copies matter","type":"docs"},{"content":" Why multiple backup copies matter # Keeping multiple copies of your backups is one of the most important principles in data protection. The goal is simple: reduce the chance that a single failure can destroy all copies of your data at once.\nThis page explains why multiple copies matter and how many you should reasonably aim for.\nWhy one copy is not enough # If all your backups exist in a single location, any failure affecting that location can result in total data loss.\nThis includes:\nHardware failures (disk crashes, controller failures) Power or electrical issues Accidental deletion or misconfiguration Theft or physical damage Natural disasters (fire, flood, earthquake) Malicious or intentional actions Even if each of these events is unlikely on its own, they happen often enough that relying on a single backup copy is risky. 
A good backup strategy assumes that failures will happen eventually.\nWhy multiple copies change the odds # Each backup copy stored in a different place acts as an independent safety net.\nData loss only occurs if all copies are lost at the same time. As long as at least one copy survives, recovery is possible.\nThis is why adding copies reduces risk so dramatically:\nWith one copy, a single failure is enough to lose everything. With two copies, data is lost only if both copies fail at the same time. With three copies, all three must fail simultaneously. In practice, the probability of independent failures overlapping is extremely low, especially when copies are stored in different locations.\nAn intuitive way to think about it # Studies of large storage systems typically show that the probability of losing data at a single site over a year is in the low single‑digit percentages.\nThat means:\nLosing one copy is not rare Losing two independent copies at the same time is very unlikely Losing three independent copies is exceptionally unlikely Each additional copy reduces risk by orders of magnitude, not by small increments.\nWhy locations matter # Multiple copies only help if failures are independent. 
Storing three backups on the same machine, in the same room, or in the same data center does not protect you from events that affect all of them at once.\nTo be effective, copies should be:\nStored on different hardware Located in different physical places Ideally managed under different failure domains The 3‑2‑1 backup strategy # These ideas are commonly summarized by the 3‑2‑1 backup rule:\n3 copies of your data (the live data plus two backups) 2 different storage types 1 off‑site copy This strategy places the risk of total data loss into the “extremely unlikely” range while remaining practical to operate.\nPlakar’s ability to synchronize Kloset Stores across locations makes it easy to apply these principles without changing how backups are created.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/explanations/why-several-copies/","section":"Docs","summary":"Understand why multiple backup copies drastically reduce the risk of data loss, and how this leads to the 3‑2‑1 backup strategy.","title":"Why multiple backup copies matter","type":"docs"},{"content":" Why multiple backup copies matter # Keeping multiple copies of your backups is one of the most important principles in data protection. The goal is simple: reduce the chance that a single failure can destroy all copies of your data at once.\nThis page explains why multiple copies matter and how many you should reasonably aim for.\nWhy one copy is not enough # If all your backups exist in a single location, any failure affecting that location can result in total data loss.\nThis includes:\nHardware failures (disk crashes, controller failures) Power or electrical issues Accidental deletion or misconfiguration Theft or physical damage Natural disasters (fire, flood, earthquake) Malicious or intentional actions Even if each of these events is unlikely on its own, they happen often enough that relying on a single backup copy is risky. 
A good backup strategy assumes that failures will happen eventually.\nWhy multiple copies change the odds # Each backup copy stored in a different place acts as an independent safety net.\nData loss only occurs if all copies are lost at the same time. As long as at least one copy survives, recovery is possible.\nThis is why adding copies reduces risk so dramatically:\nWith one copy, a single failure is enough to lose everything. With two copies, data is lost only if both copies fail at the same time. With three copies, all three must fail simultaneously. In practice, the probability of independent failures overlapping is extremely low, especially when copies are stored in different locations.\nAn intuitive way to think about it # Studies of large storage systems typically show that the probability of losing data at a single site over a year is in the low single‑digit percentages.\nThat means:\nLosing one copy is not rare Losing two independent copies at the same time is very unlikely Losing three independent copies is exceptionally unlikely Each additional copy reduces risk by orders of magnitude, not by small increments.\nWhy locations matter # Multiple copies only help if failures are independent. 
Storing three backups on the same machine, in the same room, or in the same data center does not protect you from events that affect all of them at once.\nTo be effective, copies should be:\nStored on different hardware Located in different physical places Ideally managed under different failure domains The 3‑2‑1 backup strategy # These ideas are commonly summarized by the 3‑2‑1 backup rule:\n3 copies of your data (the live data plus two backups) 2 different storage types 1 off‑site copy This strategy places the risk of total data loss into the “extremely unlikely” range while remaining practical to operate.\nPlakar’s ability to synchronize Kloset Stores across locations makes it easy to apply these principles without changing how backups are created.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/explanations/why-several-copies/","section":"Docs","summary":"Understand why multiple backup copies drastically reduce the risk of data loss, and how this leads to the 3‑2‑1 backup strategy.","title":"Why multiple backup copies matter","type":"docs"},{"content":" Why multiple backup copies matter # Keeping multiple copies of your backups is one of the most important principles in data protection. The goal is simple: reduce the chance that a single failure can destroy all copies of your data at once.\nThis page explains why multiple copies matter and how many you should reasonably aim for.\nWhy one copy is not enough # If all your backups exist in a single location, any failure affecting that location can result in total data loss.\nThis includes:\nHardware failures (disk crashes, controller failures) Power or electrical issues Accidental deletion or misconfiguration Theft or physical damage Natural disasters (fire, flood, earthquake) Malicious or intentional actions Even if each of these events is unlikely on its own, they happen often enough that relying on a single backup copy is risky. 
A good backup strategy assumes that failures will happen eventually.\nWhy multiple copies change the odds # Each backup copy stored in a different place acts as an independent safety net.\nData loss only occurs if all copies are lost at the same time. As long as at least one copy survives, recovery is possible.\nThis is why adding copies reduces risk so dramatically:\nWith one copy, a single failure is enough to lose everything. With two copies, data is lost only if both copies fail at the same time. With three copies, all three must fail simultaneously. In practice, the probability of independent failures overlapping is extremely low, especially when copies are stored in different locations.\nAn intuitive way to think about it # Studies of large storage systems typically show that the probability of losing data at a single site over a year is in the low single‑digit percentages.\nThat means:\nLosing one copy is not rare Losing two independent copies at the same time is very unlikely Losing three independent copies is exceptionally unlikely Each additional copy reduces risk by orders of magnitude, not by small increments.\nWhy locations matter # Multiple copies only help if failures are independent. 
Storing three backups on the same machine, in the same room, or in the same data center does not protect you from events that affect all of them at once.\nTo be effective, copies should be:\nStored on different hardware Located in different physical places Ideally managed under different failure domains The 3‑2‑1 backup strategy # These ideas are commonly summarized by the 3‑2‑1 backup rule:\n3 copies of your data (the live data plus two backups) 2 different storage types 1 off‑site copy This strategy places the risk of total data loss into the “extremely unlikely” range while remaining practical to operate.\nPlakar’s ability to synchronize Kloset Stores across locations makes it easy to apply these principles without changing how backups are created.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/explanations/why-several-copies/","section":"Docs","summary":"Understand why multiple backup copies drastically reduce the risk of data loss, and how this leads to the 3‑2‑1 backup strategy.","title":"Why multiple backup copies matter","type":"docs"},{"content":" Plakar: v1.0.5 # Getting Started Quickstart Guides Scheduling Tasks Importing Configurations Creating a Kloset Store Serving a Kloset Store over HTTP Excluding files from a backup Retrieving secrets via external command Logging In to Plakar Managing packages Pruning snapshots Integrations S3 SFTP / SSH Notion Dropbox iCloud Drive Koofr Google Drive OneDrive OpenDrive Proton Drive Explanations How Plakar Works Should you push or pull backups How many Kloset Stores should you create Why multiple backup copies matter Why you need to backup your SaaS How Maintenance Works References Command line syntax Commands Community ","date":"18 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/","section":"Docs","summary":"Plakar documentation hub, find guides, references, and resources for working with Plakar.","title":"Plakar: v1.0.5","type":"docs"},{"content":" Serving a Kloset Store over HTTP 
# Plakar can expose a Kloset Store over HTTP using the plakar server command. This allows other machines to access the store remotely.\nBy default, a Kloset store is only accessible locally. Serving it over HTTP lets other machines back up to or restore from the same store without copying data around. This is useful when the store lives on a NAS, a dedicated backup server, or any machine you want to treat as a central backup target.\nThis guide shows how to start an HTTP server for a Kloset Store and access it from another Plakar client.\nStarting an HTTP server # Assume you have a Kloset Store located at /var/backups. You can interact with it locally using commands like:\n$ plakar at /var/backups ls By default, Plakar listens on http://localhost:9876. To expose this store over HTTP, start the server by running:\n$ plakar at /var/backups server You can now access the store via its HTTP address:\n$ plakar at http://localhost:9876 ls All standard read operations work exactly as they do with a local store.\nEnabling delete operations # For safety, delete operations are disabled by default when serving a store over HTTP. If you explicitly want to allow deletions, start the server with:\n$ plakar at /var/backups server -allow-delete Typical use cases # Serving a Kloset Store over HTTP is useful when:\nExposing a store hosted on a NAS to other machines Accessing a local store from a remote system Bridging environments without copying data Limitations # The server exposes only the encrypted store. Clients must provide the passphrase when accessing it. TLS is not supported natively. If encryption in transit is required, use a reverse proxy such as Nginx. 
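The reverse-proxy approach mentioned in the limitations can be sketched as a minimal Nginx server block. This is an illustrative sketch, not an official configuration; the hostname and certificate paths are placeholders you would replace with your own:

```nginx
server {
    # Terminate TLS at the proxy (hostname and certificate paths are hypothetical)
    listen 443 ssl;
    server_name backups.example.com;
    ssl_certificate     /etc/ssl/certs/backups.pem;
    ssl_certificate_key /etc/ssl/private/backups.key;

    location / {
        # Forward decrypted traffic to the local plakar server instance
        proxy_pass http://127.0.0.1:9876;
    }
}
```

Clients would then connect through the proxy's address instead of port 9876 directly; since the server exposes only the encrypted store, the backup data itself stays encrypted regardless of the transport.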
","date":"16 March 2026","externalUrl":null,"permalink":"/docs/main/guides/serving-a-kloset-store-over-http/","section":"Docs","summary":"Expose a Kloset Store over HTTP using the plakar server command.","title":"Serving a Kloset Store over HTTP","type":"docs"},{"content":" Serving a Kloset Store over HTTP # Plakar can expose a Kloset Store over HTTP using the plakar server command. This allows other machines to access the store remotely.\nBy default, a Kloset store is only accessed locally. Serving it over HTTP lets other machines back up to or restore from the same store without copying data around. This is useful when the store lives on a NAS, a dedicated backup server, or any machine you want to treat as a central backup target.\nThis guide shows how to start an HTTP server for a Kloset Store and access it from another Plakar client.\nStarting an HTTP server # Assume you have a Kloset Store located at /var/backups. You can interact with it locally using commands like:\n$ plakar at /var/backups ls By default, Plakar listens on http://localhost:9876. To expose this store over HTTP, start the server by running:\n$ plakar at /var/backups server Accessing the store over HTTP # To use a Kloset Store over HTTP, install the HTTP integration:\n$ plakar pkg add http You can now access the store via its HTTP address:\n$ plakar at http://localhost:9876 ls All standard read operations work exactly as they do with a local store.\nEnabling delete operations # For safety, delete operations are disabled by default when serving a store over HTTP. If you explicitly want to allow deletions, start the server with:\n$ plakar at /var/backups server -allow-delete Typical use cases # Serving a Kloset Store over HTTP is useful when:\nExposing a store hosted on a NAS to other machines Accessing a local store from a remote system Bridging environments without copying data Limitations # The server exposes only the encrypted store. Clients must provide the passphrase when accessing it. 
TLS is not supported natively. If encryption in transit is required, use a reverse proxy such as Nginx. ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/guides/serving-a-kloset-store-over-http/","section":"Docs","summary":"Expose a Kloset Store over HTTP using the plakar server command.","title":"Serving a Kloset Store over HTTP","type":"docs"},{"content":" Serving a Kloset Store over HTTP # Plakar can expose a Kloset Store over HTTP using the plakar server command. This allows other machines to access the store remotely.\nBy default, a Kloset store is only accessed locally. Serving it over HTTP lets other machines back up to or restore from the same store without copying data around. This is useful when the store lives on a NAS, a dedicated backup server, or any machine you want to treat as a central backup target.\nThis guide shows how to start an HTTP server for a Kloset Store and access it from another Plakar client.\nStarting an HTTP server # Assume you have a Kloset Store located at /var/backups. You can interact with it locally using commands like:\n$ plakar at /var/backups ls By default, Plakar listens on http://localhost:9876. To expose this store over HTTP, start the server by running:\n$ plakar at /var/backups server Accessing the store over HTTP # To use a Kloset Store over HTTP, install the HTTP integration:\n$ plakar pkg add http You can now access the store via its HTTP address:\n$ plakar at http://localhost:9876 ls All standard read operations work exactly as they do with a local store.\nEnabling delete operations # For safety, delete operations are disabled by default when serving a store over HTTP. 
If you explicitly want to allow deletions, start the server with:\n$ plakar at /var/backups server -allow-delete Typical use cases # Serving a Kloset Store over HTTP is useful when:\nExposing a store hosted on a NAS to other machines Accessing a local store from a remote system Bridging environments without copying data Limitations # The server exposes only the encrypted store. Clients must provide the passphrase when accessing it. TLS is not supported natively. If encryption in transit is required, use a reverse proxy such as Nginx. ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/serving-a-kloset-store-over-http/","section":"Docs","summary":"Expose a Kloset Store over HTTP using the plakar server command.","title":"Serving a Kloset Store over HTTP","type":"docs"},{"content":" Serving a Kloset Store over HTTP # Plakar can expose a Kloset Store over HTTP using the plakar server command. This allows other machines to access the store remotely.\nBy default, a Kloset store is only accessed locally. Serving it over HTTP lets other machines back up to or restore from the same store without copying data around. This is useful when the store lives on a NAS, a dedicated backup server, or any machine you want to treat as a central backup target.\nThis guide shows how to start an HTTP server for a Kloset Store and access it from another Plakar client.\nStarting an HTTP server # Assume you have a Kloset Store located at /var/backups. You can interact with it locally using commands like:\n$ plakar at /var/backups ls By default, Plakar listens on http://localhost:9876. To expose this store over HTTP, start the server by running:\n$ plakar at /var/backups server You can now access the store via its HTTP address:\n$ plakar at http://localhost:9876 ls All standard read operations work exactly as they do with a local store.\nEnabling delete operations # For safety, delete operations are disabled by default when serving a store over HTTP. 
If you explicitly want to allow deletions, start the server with:\n$ plakar at /var/backups server -allow-delete Typical use cases # Serving a Kloset Store over HTTP is useful when:\nExposing a store hosted on a NAS to other machines Accessing a local store from a remote system Bridging environments without copying data Limitations # The server exposes only the encrypted store. Clients must provide the passphrase when accessing it. TLS is not supported natively. If encryption in transit is required, use a reverse proxy such as Nginx. ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/serving-a-kloset-store-over-http/","section":"Docs","summary":"Expose a Kloset Store over HTTP using the plakar server command.","title":"Serving a Kloset Store over HTTP","type":"docs"},{"content":" Synchronize multiple copies # In this guide, we will create a second copy of the Kloset Store created in Part 1 of the Quickstart.\nThis second copy will be stored on an S3-compatible storage service, but the same logic applies to any other storage location supported by Plakar.\nRequirements # This guide assumes that:\nPlakar is installed on your system (see the Installation guide). A Kloset Store exists on your local filesystem at $HOME/backups with at least one snapshot (see Part 1 of the Quickstart). Why create multiple copies? # Keeping multiple backup copies dramatically reduces the risk of total data loss by turning a realistic single-site failure into an extremely unlikely event when data is replicated across independent locations (see the why should you keep several copies of your backups guide).\nPlakar is designed to make it easy to synchronize a Kloset Store to another location.\nLogin to install pre-built integrations # By default, Plakar works without requiring you to create an account or log in.
You can back up and restore your data with just a few commands, with no external services involved.\nHowever, logging in unlocks optional features that improve usability and monitoring. Among these features, it adds the ability to install the pre-built integrations hosted on our infrastructure.\nIn this quickstart, we will use the S3 integration, which requires the integration to be installed first. Therefore, we need to log in.\nYou can log in through the CLI:\nWith Email With GitHub plakar login -email \u0026lt;youremailaddress@example.com\u0026gt; Substitute in your own email address and follow the prompt. You can then check your email and follow the link sent from plakar.io.\nplakar login -github Your default browser will open a new tab where you can authorize plakar to use your GitHub account for authentication. Follow the prompts to complete the login.\nInstall the S3 integration # Run the following command to install the S3 integration:\n$ plakar pkg add s3 If you already have the S3 integration installed and want to update it, remove the existing version first and then install the latest one:\n$ plakar pkg rm s3 $ plakar pkg add s3 You can list all installed integrations to confirm the S3 integration was installed successfully:\n$ plakar pkg list s3@v1.0.7 Set up S3-compatible storage # For this quickstart, we will use a local MinIO instance as our S3-compatible storage service. If you have access to an actual S3-compatible service (such as AWS S3, Wasabi, Backblaze B2, etc.), you can skip this step and use the credentials provided by your service provider instead.\nRun the following command to start a MinIO instance using Docker:\n$ docker run -d --name minio -p 9000:9000 -p 9001:9001 quay.io/minio/minio server /data --console-address \u0026#34;:9001\u0026#34; This command starts a MinIO server accessible at http://localhost:9000, with a web interface available at http://localhost:9001. 
The default access key is minioadmin and the secret key is also minioadmin.\nConfigure Plakar # To let Plakar know about the S3 storage, we need to configure a new store. We will call this store s3-backups.\nRun the following command to create the new store:\n$ plakar store add s3-backups \\ location=s3://localhost:9000/mybucket \\ access_key=minioadmin \\ secret_access_key=minioadmin \\ use_tls=false This command creates a new store named s3-backups that points to the mybucket bucket on the MinIO server running at localhost:9000. It uses the access key and secret key provided above. The use_tls=false option is specified because we are connecting to a local server without TLS.\nuse_tls should be omitted or set to true when connecting to production S3-compatible services that use TLS.\nInitialize the Kloset Store # For now, the Kloset Store points to a bucket that does not exist yet. We need to create it by initializing the store:\n$ plakar at \u0026#34;@s3-backups\u0026#34; create This command initializes the Kloset Store at the S3 location, creating the necessary bucket and structure to hold the backups.\nNote the @ symbol before the store name. This is an alias, which indicates that we are referencing a Kloset Store from the configuration. Without the @, Plakar would interpret s3-backups as a filesystem path.\nEscaping on Windows On Windows, make sure to use double quotes (\u0026quot;) around the store name to avoid issues with the @ symbol being interpreted by the shell. 
On Unix-like systems, quotes are often unnecessary.\nThe passphrase prompt will appear: you do not have to enter the same passphrase as the local Kloset Store, but you can if you want to.\nPlakar will automatically create the bucket if it does not already exist.\nList snapshots # If you list the snapshots in this new store, you will see that it is currently empty:\nplakar at \u0026#34;@s3-backups\u0026#34; ls Synchronize the local Kloset Store to S3 # Now, let\u0026rsquo;s synchronize the local Kloset Store at $HOME/backups to the S3 Kloset Store we just created.\nRun the following command:\n$ plakar at $HOME/backups sync to \u0026#34;@s3-backups\u0026#34; info: Synchronizing snapshot 772fba5f575272ba8742e63c6ec1878623900d158c5de4b20b854a0aa15a7b47 from fs:///Users/niluje/backups to s3://localhost:9000/mybucket info: Synchronization of 772fba5f575272ba8742e63c6ec1878623900d158c5de4b20b854a0aa15a7b47 finished info: sync: synchronization from fs:///Users/niluje/backups to s3://localhost:9000/mybucket completed: 1 snapshots synchronized The command transfers all the snapshots from the local Kloset Store to the S3 Kloset Store.\nTo verify that the synchronization was successful, you can list the snapshots in the S3 Kloset Store again:\n$ plakar at \u0026#34;@s3-backups\u0026#34; ls 2025-12-15T21:09:32Z 772fba5f 2.9 MiB 0s /private/etc Notice that the snapshot ID is the same as the one in the local Kloset Store, confirming that the data has been successfully copied.\nIf you run the sync command again, you will see that no data is transferred because the destination store already contains all the snapshots from the source store.\n$ plakar at $HOME/backups sync to \u0026#34;@s3-backups\u0026#34; destination store passphrase: info: sync: synchronization from fs:///Users/niluje/backups to s3://localhost:9000/mybucket completed: 0 snapshots synchronized In production, you would typically run this command periodically to ensure that the S3 Kloset Store remains up-to-date with 
the local Kloset Store. If there are no new snapshots to transfer, the command will complete quickly without transferring any data.\nA remote Kloset Store works just like a local one # As a side note, you can use the remote Kloset Store just as you would use the local one:\nto run the UI, plakar at \u0026quot;@s3-backups\u0026quot; ui to verify integrity, plakar at \u0026quot;@s3-backups\u0026quot; check to restore files, plakar at \u0026quot;@s3-backups\u0026quot; restore -to /tmp/restore \u0026lt;snapshot-id\u0026gt; Check plakar help to see all the available commands.\nCongratulations! # You have successfully created a second copy of your Kloset Store on S3-compatible storage. Your backups are now stored in two independent locations.\nIn this example, we used a local MinIO instance for demonstration purposes. In a real-world scenario, you would use a reliable S3-compatible service to have your backups stored offsite.\nWith Plakar, hosting your backups on S3 is as easy as hosting them locally. In addition, many more store integrations are available for hosting your backups in other locations.\nNext steps # Modern infrastructure involves more than just filesystems. In the next part of this Quickstart, we will see how to back up an S3 bucket using plakar.\nContinue to Part 3 of the Quickstart to create a backup for your non-filesystem data. 
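The periodic run described above is usually automated. As a minimal sketch (an illustration, not an official recipe), assuming plakar is on the cron user's PATH and the destination store passphrase can be supplied non-interactively, a crontab entry could look like this:

```shell
# Hypothetical crontab entry: sync the local Kloset Store to the S3 copy
# every night at 02:00. Assumes plakar is on the cron user's PATH and the
# destination passphrase is available non-interactively (interactive runs
# prompt for it, as shown above). The log file name is illustrative.
0 2 * * * plakar at "$HOME/backups" sync to "@s3-backups" >> "$HOME/plakar-sync.log" 2>&1
```

Because repeat runs transfer nothing when the destination already holds every snapshot, a nightly schedule is cheap. Plakar also lists a scheduler command in its commands reference, which may be a better fit than cron for managing recurring backup tasks.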
","date":"11 March 2026","externalUrl":null,"permalink":"/docs/main/quickstart/synchronize-copies/","section":"Docs","summary":"Create a second copy of your Kloset Store to improve the durability of your backups.","title":"Synchronize multiple copies","type":"docs"},{"content":" Synchronize multiple copies # In this guide, we will create a second copy of the Kloset Store created in Part 1 of the Quickstart.\nThis second copy will be stored on an S3-compatible storage service, but the same logic applies to any other storage location supported by Plakar.\nRequirements # This guide assumes that:\nPlakar is installed on your system (see the Installation guide). A Kloset Store exists on your local filesystem at $HOME/backups with at least one snapshot (see Part 1 of the Quickstart). Why create multiple copies? # Keeping multiple backup copies dramatically reduces the risk of total data loss by turning a realistic single-site failure into an extremely unlikely event when data is replicated across independent locations (see the why should you keep several copies of your backups guide).\nPlakar is designed to make it easy to synchronize a Kloset Store to another location.\nLogin to install pre-built integrations # By default, Plakar works without requiring you to create an account or log in. You can back up and restore your data with just a few commands, with no external services involved.\nHowever, logging in unlocks optional features that improve usability and monitoring. Among these features, it adds the ability to install the pre-built integrations hosted on our infrastructure.\nIn this quickstart, we will use the S3 integration, which requires the integration to be installed first. Therefore, we need to log in.\nYou can log in through the CLI:\nWith Email With GitHub plakar login -email \u0026lt;youremailaddress@example.com\u0026gt; Substitute in your own email address and follow the prompt.
You can then check your email and follow the link sent from plakar.io.\nplakar login -github Your default browser will open a new tab where you can authorize plakar to use your GitHub account for authentication. Follow the prompts to complete the login.\nInstall the S3 integration # Run the following command to install the S3 integration:\n$ plakar pkg add s3 If you already have the S3 integration installed and want to update it, remove the existing version first and then install the latest one:\n$ plakar pkg rm s3 $ plakar pkg add s3 You can list all installed integrations to confirm the S3 integration was installed successfully:\n$ plakar pkg list s3@v1.0.7 Set up S3-compatible storage # For this quickstart, we will use a local MinIO instance as our S3-compatible storage service. If you have access to an actual S3-compatible service (such as AWS S3, Wasabi, Backblaze B2, etc.), you can skip this step and use the credentials provided by your service provider instead.\nRun the following command to start a MinIO instance using Docker:\n$ docker run -d --name minio -p 9000:9000 -p 9001:9001 quay.io/minio/minio server /data --console-address \u0026#34;:9001\u0026#34; This command starts a MinIO server accessible at http://localhost:9000, with a web interface available at http://localhost:9001. The default access key is minioadmin and the secret key is also minioadmin.\nConfigure Plakar # To let Plakar know about the S3 storage, we need to configure a new store. We will call this store s3-backups.\nRun the following command to create the new store:\n$ plakar store add s3-backups \\ location=s3://localhost:9000/mybucket \\ access_key=minioadmin \\ secret_access_key=minioadmin \\ use_tls=false This command creates a new store named s3-backups that points to the mybucket bucket on the MinIO server running at localhost:9000. It uses the access key and secret key provided above. 
The use_tls=false option is specified because we are connecting to a local server without TLS.\nuse_tls should be omitted or set to true when connecting to production S3-compatible services that use TLS.\nInitialize the Kloset Store # For now, the Kloset Store points to a bucket that does not exist yet. We need to create it by initializing the store:\n$ plakar at \u0026#34;@s3-backups\u0026#34; create This command initializes the Kloset Store at the S3 location, creating the necessary bucket and structure to hold the backups.\nNote the @ symbol before the store name. This is an alias, which indicates that we are referencing a Kloset Store from the configuration. Without the @, Plakar would interpret s3-backups as a filesystem path.\nEscaping on Windows On Windows, make sure to use double quotes (\u0026quot;) around the store name to avoid issues with the @ symbol being interpreted by the shell. On Unix-like systems, quotes are often unnecessary.\nThe passphrase prompt will appear: you do not have to enter the same passphrase as the local Kloset Store, but you can if you want to.\nPlakar will automatically create the bucket if it does not already exist.\nList snapshots # If you list the snapshots in this new store, you will see that it is currently empty:\nplakar at \u0026#34;@s3-backups\u0026#34; ls Synchronize the local Kloset Store to S3 # Now, let\u0026rsquo;s synchronize the local Kloset Store at $HOME/backups to the S3 Kloset Store we just created.\nRun the following command:\n$ plakar at $HOME/backups sync to \u0026#34;@s3-backups\u0026#34; info: Synchronizing snapshot 772fba5f575272ba8742e63c6ec1878623900d158c5de4b20b854a0aa15a7b47 from fs:///Users/niluje/backups to s3://localhost:9000/mybucket info: Synchronization of 772fba5f575272ba8742e63c6ec1878623900d158c5de4b20b854a0aa15a7b47 finished info: sync: synchronization from fs:///Users/niluje/backups to s3://localhost:9000/mybucket completed: 1 snapshots synchronized The command transfers all the snapshots 
from the local Kloset Store to the S3 Kloset Store.\nTo verify that the synchronization was successful, you can list the snapshots in the S3 Kloset Store again:\n$ plakar at \u0026#34;@s3-backups\u0026#34; ls 2025-12-15T21:09:32Z 772fba5f 2.9 MiB 0s /private/etc Notice that the snapshot ID is the same as the one in the local Kloset Store, confirming that the data has been successfully copied.\nIf you run the sync command again, you will see that no data is transferred because the destination store already contains all the snapshots from the source store.\n$ plakar at $HOME/backups sync to \u0026#34;@s3-backups\u0026#34; destination store passphrase: info: sync: synchronization from fs:///Users/niluje/backups to s3://localhost:9000/mybucket completed: 0 snapshots synchronized In production, you would typically run this command periodically to ensure that the S3 Kloset Store remains up-to-date with the local Kloset Store. If there are no new snapshots to transfer, the command will complete quickly without transferring any data.\nA remote Kloset Store works just like a local one # As a side note, you can use the remote Kloset Store just as you would use the local one:\nto run the UI, plakar at \u0026quot;@s3-backups\u0026quot; ui to verify integrity, plakar at \u0026quot;@s3-backups\u0026quot; check to restore files, plakar at \u0026quot;@s3-backups\u0026quot; restore -to /tmp/restore \u0026lt;snapshot-id\u0026gt; Check plakar help to see all the available commands.\nCongratulations! # You have successfully created a second copy of your Kloset Store on S3-compatible storage. Your backups are now stored in two independent locations.\nIn this example, we used a local MinIO instance for demonstration purposes. In a real-world scenario, you would use a reliable S3-compatible service to have your backups stored offsite.\nWith Plakar, hosting your backups on S3 is as easy as hosting them locally. 
In addition, many more store integrations are available for hosting your backups in other locations.\nNext steps # Modern infrastructure involves more than just filesystems. In the next part of this Quickstart, we will see how to back up an S3 bucket using plakar.\nContinue to Part 3 of the Quickstart to create a backup for your non-filesystem data. ","date":"11 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/quickstart/synchronize-copies/","section":"Docs","summary":"Create a second copy of your Kloset Store to improve the durability of your backups.","title":"Synchronize multiple copies","type":"docs"},{"content":" Synchronize multiple copies # In this guide, we will create a second copy of the Kloset Store created in Part 1 of the Quickstart.\nThis second copy will be stored on an S3-compatible storage service, but the same logic applies to any other storage location supported by Plakar.\nRequirements # This guide assumes that:\nPlakar is installed on your system (see the Installation guide). A Kloset Store exists on your local filesystem at $HOME/backups with at least one snapshot (see Part 1 of the Quickstart). Why create multiple copies? # Keeping multiple backup copies dramatically reduces the risk of total data loss by turning a realistic single-site failure into an extremely unlikely event when data is replicated across independent locations (see the why should you keep several copies of your backups guide).\nPlakar is designed to make it easy to synchronize a Kloset Store to another location.\nLogin to install pre-built integrations # By default, Plakar works without requiring you to create an account or log in. You can back up and restore your data with just a few commands, with no external services involved.\nHowever, logging in unlocks optional features that improve usability and monitoring.
Among these features, it adds the ability to install the pre-built integrations hosted on our infrastructure.\nIn this quickstart, we will use the S3 integration, which requires the integration to be installed first. Therefore, we need to log in.\nYou can log in through the CLI:\nWith Email With GitHub plakar login -email \u0026lt;youremailaddress@example.com\u0026gt; Substitute in your own email address and follow the prompt. You can then check your email and follow the link sent from plakar.io.\nplakar login -github Your default browser will open a new tab where you can authorize plakar to use your GitHub account for authentication. Follow the prompts to complete the login.\nInstall the S3 integration # Run the following command to install the S3 integration:\n$ plakar pkg add s3 If you already have the S3 integration installed and want to update it, remove the existing version first and then install the latest one:\n$ plakar pkg rm s3 $ plakar pkg add s3 You can list all installed integrations to confirm the S3 integration was installed successfully:\n$ plakar pkg list s3@v1.0.7 Set up S3-compatible storage # For this quickstart, we will use a local MinIO instance as our S3-compatible storage service. If you have access to an actual S3-compatible service (such as AWS S3, Wasabi, Backblaze B2, etc.), you can skip this step and use the credentials provided by your service provider instead.\nRun the following command to start a MinIO instance using Docker:\n$ docker run -d --name minio -p 9000:9000 -p 9001:9001 quay.io/minio/minio server /data --console-address \u0026#34;:9001\u0026#34; This command starts a MinIO server accessible at http://localhost:9000, with a web interface available at http://localhost:9001. The default access key is minioadmin and the secret key is also minioadmin.\nConfigure Plakar # To let Plakar know about the S3 storage, we need to configure a new store. 
We will call this store s3-backups.\nRun the following command to create the new store:\n$ plakar store add s3-backups \\ location=s3://localhost:9000/mybucket \\ access_key=minioadmin \\ secret_access_key=minioadmin \\ use_tls=false This command creates a new store named s3-backups that points to the mybucket bucket on the MinIO server running at localhost:9000. It uses the access key and secret key provided above. The use_tls=false option is specified because we are connecting to a local server without TLS.\nuse_tls should be omitted or set to true when connecting to production S3-compatible services that use TLS.\nInitialize the Kloset Store # For now, the Kloset Store points to a bucket that does not exist yet. We need to create it by initializing the store:\n$ plakar at \u0026#34;@s3-backups\u0026#34; create This command initializes the Kloset Store at the S3 location, creating the necessary bucket and structure to hold the backups.\nNote the @ symbol before the store name. This is an alias, which indicates that we are referencing a Kloset Store from the configuration. Without the @, Plakar would interpret s3-backups as a filesystem path.\nEscaping on Windows On Windows, make sure to use double quotes (\u0026quot;) around the store name to avoid issues with the @ symbol being interpreted by the shell. 
On Unix-like systems, quotes are often unnecessary.\nThe passphrase prompt will appear: you do not have to enter the same passphrase as the local Kloset Store, but you can if you want to.\nPlakar will automatically create the bucket if it does not already exist.\nList snapshots # If you list the snapshots in this new store, you will see that it is currently empty:\nplakar at \u0026#34;@s3-backups\u0026#34; ls Synchronize the local Kloset Store to S3 # Now, let\u0026rsquo;s synchronize the local Kloset Store at $HOME/backups to the S3 Kloset Store we just created.\nRun the following command:\n$ plakar at $HOME/backups sync to \u0026#34;@s3-backups\u0026#34; info: Synchronizing snapshot 772fba5f575272ba8742e63c6ec1878623900d158c5de4b20b854a0aa15a7b47 from fs:///Users/niluje/backups to s3://localhost:9000/mybucket info: Synchronization of 772fba5f575272ba8742e63c6ec1878623900d158c5de4b20b854a0aa15a7b47 finished info: sync: synchronization from fs:///Users/niluje/backups to s3://localhost:9000/mybucket completed: 1 snapshots synchronized The command transfers all the snapshots from the local Kloset Store to the S3 Kloset Store.\nTo verify that the synchronization was successful, you can list the snapshots in the S3 Kloset Store again:\n$ plakar at \u0026#34;@s3-backups\u0026#34; ls 2025-12-15T21:09:32Z 772fba5f 2.9 MiB 0s /private/etc Notice that the snapshot ID is the same as the one in the local Kloset Store, confirming that the data has been successfully copied.\nIf you run the sync command again, you will see that no data is transferred because the destination store already contains all the snapshots from the source store.\n$ plakar at $HOME/backups sync to \u0026#34;@s3-backups\u0026#34; destination store passphrase: info: sync: synchronization from fs:///Users/niluje/backups to s3://localhost:9000/mybucket completed: 0 snapshots synchronized In production, you would typically run this command periodically to ensure that the S3 Kloset Store remains up-to-date with 
the local Kloset Store. If there are no new snapshots to transfer, the command will complete quickly without transferring any data.\nA remote Kloset Store works just like a local one # As a side note, you can use the remote Kloset Store just as you would use the local one:\nto run the UI, plakar at \u0026quot;@s3-backups\u0026quot; ui to verify integrity, plakar at \u0026quot;@s3-backups\u0026quot; check to restore files, plakar at \u0026quot;@s3-backups\u0026quot; restore -to /tmp/restore \u0026lt;snapshot-id\u0026gt; Check plakar help to see all the available commands.\nCongratulations! # You have successfully created a second copy of your Kloset Store on S3-compatible storage. Your backups are now stored in two independent locations.\nIn this example, we used a local MinIO instance for demonstration purposes. In a real-world scenario, you would use a reliable S3-compatible service to have your backups stored offsite.\nWith Plakar, hosting your backups on S3 is as easy as hosting them locally. In addition, many more store integrations are available for hosting your backups in other locations.\nNext steps # Modern infrastructure involves more than just filesystems. In the next part of this Quickstart, we will see how to back up an S3 bucket using plakar.\nContinue to Part 3 of the Quickstart to create a backup for your non-filesystem data. ","date":"11 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/quickstart/synchronize-copies/","section":"Docs","summary":"Create a second copy of your Kloset Store to improve the durability of your backups.","title":"Synchronize multiple copies","type":"docs"},{"content":" Commands # Welcome to the Plakar commands reference. This section provides detailed documentation for all available Plakar commands, including usage, options, and examples. You can browse the command documentation here, or access it directly from your terminal using plakar help. 
This allows you to explore command behavior and options whether you are online or working locally. Below is the complete list of commands. Select any command to view its detailed documentation.\narchive Create an archive from a Plakar snapshot\nbackup Create a new snapshot in a Kloset store\ncat Display file contents from a Plakar snapshot\ncheck Check data integrity in a Plakar repository\ncreate Create a new Plakar repository\ndestination Manage Plakar restore destination configuration\ndiag Display detailed information about Plakar internal structures\ndiff Show differences between files in Plakar snapshots\ndigest Compute digests for files in a Plakar snapshot\ndup Duplicates an existing snapshot with a different ID\ninfo Display detailed information about internal structures\nlocate Find filenames in a Plakar snapshot\nlogin Authenticate to Plakar services\nlogout Log out from Plakar services\nls List snapshots and their contents in a Plakar repository\nmaintenance Remove unused data from a Plakar repository\nmount Mount Plakar snapshots as read-only filesystem\npkg-add Install Plakar plugins\npkg-build Build Plakar plugins from source\npkg-create Package a plugin\npkg-manifest.yaml Manifest for plugin assembly\npkg-recipe.yaml Recipe to build Plakar plugins from source\npkg-rm Uninstall Plakar plugins\npkg-show Show installed Plakar plugins\nplakar effortless backups\npolicy Manage Plakar retention policies\nprune Prune snapshots according to a policy\nptar generate a self-contained Kloset archive (.ptar)\nquery query flags shared among many Plakar subcommands\nrestore Restore files from a Plakar snapshot\nrm Remove snapshots from a Plakar repository\nscheduler Run the Plakar scheduler\nserver Start a Plakar server\nservice Manage optional Plakar-connected services\nsource Manage Plakar backup source configuration\nstore Manage Plakar store configurations\nsync Synchronize snapshots between Plakar repositories\ntoken-create Create a token to
authenticate to Plakar services\nui Serve the Plakar web user interface\nversion Display the current Plakar version\n","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/","section":"Docs","summary":"Reference for all Plakar commands. Browse detailed documentation for each command, including usage, options, and examples. Access help online or directly from your terminal.","title":"Commands","type":"docs"},{"content":" Commands # Welcome to the Plakar commands reference. This section provides detailed documentation for all available Plakar commands, including usage, options, and examples. You can browse the command documentation here, or access it directly from your terminal using plakar help. This allows you to explore command behavior and options whether you are online or working locally. Below is the complete list of commands. Select any command to view its detailed documentation.\nagent Run the Plakar agent\narchive Create an archive from a Plakar snapshot\nbackup Create a new snapshot in a Kloset store\ncat Display file contents from a Plakar snapshot\ncheck Check data integrity in a Plakar repository\nclone Clone a Plakar repository to a new location\ncreate Create a new Plakar repository\ndestination Manage Plakar restore destination configuration\ndiag Display detailed information about Plakar internal structures\ndiff Show differences between files in Plakar snapshots\ndigest Compute digests for files in a Plakar snapshot\ndup Duplicates an existing snapshot with a different ID\ninfo Display detailed information about internal structures\nlocate Find filenames in a Plakar snapshot\nlogin Authenticate to Plakar services\nlogout Log out from Plakar services\nls List snapshots and their contents in a Plakar repository\nmaintenance Remove unused data from a Plakar repository\nmount Mount Plakar snapshots as read-only filesystem\npkg-add Install Plakar plugins\npkg-build Build Plakar plugins from source\npkg-create Package a
plugin\npkg-manifest.yaml Manifest for plugin assembly\npkg-recipe.yaml Recipe to build Plakar plugins from source\npkg-rm Uninstall Plakar plugins\npkg-show Show installed Plakar plugins\nplakar effortless backups\npolicy Manage Plakar retention policies\nprune Prune snapshots according to a policy\nptar generate a self-contained Kloset archive (.ptar)\nquery query flags shared among many Plakar subcommands\nrestore Restore files from a Plakar snapshot\nrm Remove snapshots from a Plakar repository\nscheduler Run the Plakar scheduler\nserver Start a Plakar server\nservice Manage optional Plakar-connected services\nsource Manage Plakar backup source configuration\nstore Manage Plakar store configurations\nsync Synchronize snapshots between Plakar repositories\ntoken Manage Plakar tokens\nui Serve the Plakar web user interface\nversion Display the current Plakar version\n","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/","section":"Docs","summary":"Reference for all Plakar commands. Browse detailed documentation for each command, including usage, options, and examples. Access help online or directly from your terminal.","title":"Commands","type":"docs"},{"content":" Commands # Welcome to the Plakar commands reference. This section provides detailed documentation for all available Plakar commands, including usage, options, and examples. You can browse the command documentation here, or access it directly from your terminal using plakar help. This allows you to explore command behavior and options whether you are online or working locally. Below is the complete list of commands.
Select any command to view its detailed documentation.\nagent Run the Plakar agent\narchive Create an archive from a Plakar snapshot\nbackup Create a new snapshot in a Kloset store\ncat Display file contents from a Plakar snapshot\ncheck Check data integrity in a Plakar repository\nclone Clone a Plakar repository to a new location\ncreate Create a new Plakar repository\ndestination Manage Plakar restore destination configuration\ndiag Display detailed information about Plakar internal structures\ndiff Show differences between files in Plakar snapshots\ndigest Compute digests for files in a Plakar snapshot\ndup Duplicate an existing snapshot with a different ID\ninfo Display detailed information about internal structures\nlocate Find filenames in a Plakar snapshot\nlogin Authenticate to Plakar services\nlogout Log out from Plakar services\nls List snapshots and their contents in a Plakar repository\nmaintenance Remove unused data from a Plakar repository\nmount Mount Plakar snapshots as a read-only filesystem\npkg-add Install Plakar plugins\npkg-build Build Plakar plugins from source\npkg-create Package a plugin\npkg-manifest.yaml Manifest for plugin assembly\npkg-recipe.yaml Recipe to build Plakar plugins from source\npkg-rm Uninstall Plakar plugins\npkg-show Show installed Plakar plugins\nplakar effortless backups\npolicy Manage Plakar retention policies\nprune Prune snapshots according to a policy\nptar Generate a self-contained Kloset archive (.ptar)\nquery Query flags shared among many Plakar subcommands\nrestore Restore files from a Plakar snapshot\nrm Remove snapshots from a Plakar repository\nscheduler Run the Plakar scheduler\nserver Start a Plakar server\nservice Manage optional Plakar-connected services\nsource Manage Plakar backup source configuration\nstore Manage Plakar store configurations\nsync Synchronize snapshots between Plakar repositories\ntoken Manage Plakar tokens\nui Serve the Plakar web user interface\nversion Display the current Plakar 
version\n","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/","section":"Docs","summary":"Reference for all Plakar commands. Browse detailed documentation for each command, including usage, options, and examples. Access help online or directly from your terminal.","title":"Commands","type":"docs"},{"content":" Commands # Welcome to the Plakar commands reference. This section provides detailed documentation for all available Plakar commands, including usage, options, and examples. You can browse the command documentation here, or access it directly from your terminal using plakar help. This allows you to explore command behavior and options whether you are online or working locally. Below is the complete list of commands. Select any command to view its detailed documentation.\narchive Create an archive from a Plakar snapshot\nbackup Create a new snapshot in a Kloset store\ncat Display file contents from a Plakar snapshot\ncheck Check data integrity in a Plakar repository\ncreate Create a new Plakar repository\ndestination Manage Plakar restore destination configuration\ndiag Display detailed information about Plakar internal structures\ndiff Show differences between files in Plakar snapshots\ndigest Compute digests for files in a Plakar snapshot\ndup Duplicate an existing snapshot with a different ID\ninfo Display detailed information about internal structures\nlocate Find filenames in a Plakar snapshot\nlogin Authenticate to Plakar services\nlogout Log out from Plakar services\nls List snapshots and their contents in a Plakar repository\nmaintenance Remove unused data from a Plakar repository\nmount Mount Plakar snapshots as a read-only filesystem\npkg-add Install Plakar plugins\npkg-build Build Plakar plugins from source\npkg-create Package a plugin\npkg-manifest.yaml Manifest for plugin assembly\npkg-recipe.yaml Recipe to build Plakar plugins from source\npkg-rm Uninstall Plakar plugins\npkg-show Show installed Plakar 
plugins\nplakar effortless backups\npolicy Manage Plakar retention policies\nprune Prune snapshots according to a policy\nptar Generate a self-contained Kloset archive (.ptar)\nquery Query flags shared among many Plakar subcommands\nrestore Restore files from a Plakar snapshot\nrm Remove snapshots from a Plakar repository\nscheduler Run the Plakar scheduler\nserver Start a Plakar server\nservice Manage optional Plakar-connected services\nsource Manage Plakar backup source configuration\nstore Manage Plakar store configurations\nsync Synchronize snapshots between Plakar repositories\ntoken Manage Plakar tokens\nui Serve the Plakar web user interface\nversion Display the current Plakar version\n","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/","section":"Docs","summary":"Reference for all Plakar commands. Browse detailed documentation for each command, including usage, options, and examples. Access help online or directly from your terminal.","title":"Commands","type":"docs"},{"content":" iCloud Drive # The Plakar iCloud Drive integration allows you to interact with iCloud Drive, Apple\u0026rsquo;s cloud storage service, using Rclone.\nRclone is a command-line program to manage files on cloud storage, and supports iCloud Drive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one iCloud Drive remote must be configured. 
Typical use cases\nCold backup of iCloud Drive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with iCloud Drive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # iCloud Drive login Due to current limitations in Rclone, logging in to iCloud Drive is not possible at the time of writing. The steps below are provided for reference and future compatibility.\nInstall Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with iCloud Drive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for iCloud Drive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;iCloud Drive\u0026rdquo; from the list of supported storage providers. Enter your Apple ID Enter your password Validate the remote configuration. 
To verify that the remote is configured, run:\n$ rclone config show mydrive To verify that you have access to your iCloud Drive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your iCloud Drive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with iCloud Drive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot via Rclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via Rclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # 
Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data via Rclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # At the time of writing and according to this GitHub issue, it is currently impossible to log in to iCloud Drive using Rclone.\nSee also # Rclone documentation for iCloud Drive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/main/integrations/iclouddrive/","section":"Docs","summary":"Back up and restore your iCloud Drive with Plakar, and host Kloset stores in iCloud Drive.","title":"iCloud Drive","type":"docs"},{"content":" iCloud Drive # The Plakar iCloud Drive integration allows you to interact with iCloud Drive, Apple\u0026rsquo;s cloud storage service, using Rclone.\nRclone is a command-line program to manage files on cloud storage, and supports iCloud Drive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one iCloud Drive remote must be configured. 
Typical use cases\nCold backup of iCloud Drive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with iCloud Drive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # iCloud Drive login Due to current limitations in Rclone, logging in to iCloud Drive is not possible at the time of writing. The steps below are provided for reference and future compatibility.\nInstall Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with iCloud Drive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for iCloud Drive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;iCloud Drive\u0026rdquo; from the list of supported storage providers. Enter your Apple ID Enter your password Validate the remote configuration. 
To verify that the remote is configured, run:\n$ rclone config show mydrive To verify that you have access to your iCloud Drive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your iCloud Drive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with iCloud Drive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot via Rclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via Rclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # 
Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data via Rclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # At the time of writing and according to this GitHub issue, it is currently impossible to log in to iCloud Drive using Rclone.\nSee also # Rclone documentation for iCloud Drive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/integrations/iclouddrive/","section":"Docs","summary":"Back up and restore your iCloud Drive with Plakar, and host Kloset stores in iCloud Drive.","title":"iCloud Drive","type":"docs"},{"content":" iCloud Drive # The Plakar iCloud Drive integration allows you to interact with iCloud Drive, Apple\u0026rsquo;s cloud storage service, using Rclone.\nRclone is a command-line program to manage files on cloud storage, and supports iCloud Drive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one iCloud Drive remote must be configured. 
Typical use cases\nCold backup of iCloud Drive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with iCloud Drive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # iCloud Drive login Due to current limitations in Rclone, logging in to iCloud Drive is not possible at the time of writing. The steps below are provided for reference and future compatibility.\nInstall Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with iCloud Drive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for iCloud Drive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;iCloud Drive\u0026rdquo; from the list of supported storage providers. Enter your Apple ID Enter your password Validate the remote configuration. 
To verify that the remote is configured, run:\n$ rclone config show mydrive To verify that you have access to your iCloud Drive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your iCloud Drive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with iCloud Drive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot via Rclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via Rclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # 
Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data via Rclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # At the time of writing and according to this GitHub issue, it is currently impossible to log in to iCloud Drive using Rclone.\nSee also # Rclone documentation for iCloud Drive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/integrations/iclouddrive/","section":"Docs","summary":"Back up and restore your iCloud Drive with Plakar, and host Kloset stores in iCloud Drive.","title":"iCloud Drive","type":"docs"},{"content":" iCloud Drive # The Plakar iCloud Drive integration allows you to interact with iCloud Drive, Apple\u0026rsquo;s cloud storage service, using Rclone.\nRclone is a command-line program to manage files on cloud storage, and supports iCloud Drive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one iCloud Drive remote must be configured. 
Typical use cases\nCold backup of iCloud Drive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with iCloud Drive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # iCloud Drive login Due to current limitations in Rclone, logging in to iCloud Drive is not possible at the time of writing. The steps below are provided for reference and future compatibility.\nInstall Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with iCloud Drive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for iCloud Drive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;iCloud Drive\u0026rdquo; from the list of supported storage providers. Enter your Apple ID Enter your password Validate the remote configuration. 
To verify that the remote is configured, run:\n$ rclone config show mydrive To verify that you have access to your iCloud Drive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your iCloud Drive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with iCloud Drive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot via Rclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via Rclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # 
Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data via Rclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # At the time of writing and according to this GitHub issue, it is currently impossible to log in to iCloud Drive using Rclone.\nSee also # Rclone documentation for iCloud Drive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/integrations/iclouddrive/","section":"Docs","summary":"Back up and restore your iCloud Drive with Plakar, and host Kloset stores in iCloud Drive.","title":"iCloud Drive","type":"docs"},{"content":" Why you need to back up your SaaS # Modern SaaS platforms such as Google Drive, Dropbox, Notion, and others are highly available and reliable. They are designed to keep services online and data accessible. However, availability is not the same thing as data protection.\nThis page explains why relying solely on SaaS providers is not enough, and why independent backups are still necessary.\nAvailability is not protection # SaaS providers focus on keeping their platforms running:\nServers stay online Data is replicated Services remain accessible This protects the service, not your data. 
If data is deleted, corrupted, or altered, the platform will reliably synchronize that change everywhere.\nThe shared responsibility model # SaaS providers operate under a shared responsibility model.\nThe provider secures the infrastructure and platform You are responsible for your data This means providers generally do not protect you from:\nAccidental deletion Malicious or unintended changes Account compromise Ransomware or sync‑based corruption Compliance or long‑term retention needs If data is removed or modified legitimately from the provider’s point of view, it is often considered permanent.\nSync is not a backup # Most SaaS platforms are built around synchronization. Sync ensures that:\nChanges propagate instantly All devices see the same state Deletions are mirrored everywhere This is useful for collaboration, but dangerous for recovery. Mistakes, corruption, or malicious changes spread just as reliably as valid ones.\nVersion history has limits # Some SaaS platforms provide version history or trash retention.\nThese features help with short‑term mistakes, but they:\nAre time‑limited Depend on the same account and infrastructure Cannot guarantee long‑term recovery May not meet compliance or audit requirements Version history helps with recent errors, not with long‑term resilience.\nWhy independent backups matter # An independent backup creates a clean separation between:\nYour live SaaS data Your recovery data This separation ensures that:\nAccount issues do not affect backups Provider outages do not block recovery Data can be restored from any point in time Retention policies are under your control Independent backups ensure that your data remains recoverable, regardless of what happens inside the SaaS platform itself.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/explanations/why-should-i-backup-my-saas/","section":"Docs","summary":"Understand why cloud services do not replace backups, and why SaaS data requires independent 
protection.","title":"Why you need to backup your SaaS","type":"docs"},{"content":" Why you need to backup your SaaS # Modern SaaS platforms such as Google Drive, Dropbox, Notion, and others are highly available and reliable. They are designed to keep services online and data accessible. However, availability is not the same thing as data protection.\nThis page explains why relying solely on SaaS providers is not enough, and why independent backups are still necessary.\nAvailability is not protection # SaaS providers focus on keeping their platforms running:\nServers stay online Data is replicated Services remain accessible This protects the service, not your data. If data is deleted, corrupted, or altered, the platform will reliably synchronize that change everywhere.\nThe shared responsibility model # SaaS providers operate under a shared responsibility model.\nThe provider secures the infrastructure and platform You are responsible for your data This means providers generally do not protect you from:\nAccidental deletion Malicious or unintended changes Account compromise Ransomware or sync‑based corruption Compliance or long‑term retention needs If data is removed or modified legitimately from the provider’s point of view, it is often considered permanent.\nSync is not a backup # Most SaaS platforms are built around synchronization. Sync ensures that:\nChanges propagate instantly All devices see the same state Deletions are mirrored everywhere This is useful for collaboration, but dangerous for recovery. 
Mistakes, corruption, or malicious changes spread just as reliably as valid ones.\nVersion history has limits # Some SaaS platforms provide version history or trash retention.\nThese features help with short‑term mistakes, but they:\nAre time‑limited Depend on the same account and infrastructure Cannot guarantee long‑term recovery May not meet compliance or audit requirements Version history helps with recent errors, not with long‑term resilience.\nWhy independent backups matter # An independent backup creates a clean separation between:\nYour live SaaS data Your recovery data This separation ensures that:\nAccount issues do not affect backups Provider outages do not block recovery Data can be restored from any point in time Retention policies are under your control Independent backups ensure that your data remains recoverable, regardless of what happens inside the SaaS platform itself.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/explanations/why-should-i-backup-my-saas/","section":"Docs","summary":"Understand why cloud services do not replace backups, and why SaaS data requires independent protection.","title":"Why you need to backup your SaaS","type":"docs"},{"content":" Why you need to backup your SaaS # Modern SaaS platforms such as Google Drive, Dropbox, Notion, and others are highly available and reliable. They are designed to keep services online and data accessible. However, availability is not the same thing as data protection.\nThis page explains why relying solely on SaaS providers is not enough, and why independent backups are still necessary.\nAvailability is not protection # SaaS providers focus on keeping their platforms running:\nServers stay online Data is replicated Services remain accessible This protects the service, not your data. 
If data is deleted, corrupted, or altered, the platform will reliably synchronize that change everywhere.\nThe shared responsibility model # SaaS providers operate under a shared responsibility model.\nThe provider secures the infrastructure and platform You are responsible for your data This means providers generally do not protect you from:\nAccidental deletion Malicious or unintended changes Account compromise Ransomware or sync‑based corruption Compliance or long‑term retention needs If data is removed or modified legitimately from the provider’s point of view, it is often considered permanent.\nSync is not a backup # Most SaaS platforms are built around synchronization. Sync ensures that:\nChanges propagate instantly All devices see the same state Deletions are mirrored everywhere This is useful for collaboration, but dangerous for recovery. Mistakes, corruption, or malicious changes spread just as reliably as valid ones.\nVersion history has limits # Some SaaS platforms provide version history or trash retention.\nThese features help with short‑term mistakes, but they:\nAre time‑limited Depend on the same account and infrastructure Cannot guarantee long‑term recovery May not meet compliance or audit requirements Version history helps with recent errors, not with long‑term resilience.\nWhy independent backups matter # An independent backup creates a clean separation between:\nYour live SaaS data Your recovery data This separation ensures that:\nAccount issues do not affect backups Provider outages do not block recovery Data can be restored from any point in time Retention policies are under your control Independent backups ensure that your data remains recoverable, regardless of what happens inside the SaaS platform itself.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/explanations/why-should-i-backup-my-saas/","section":"Docs","summary":"Understand why cloud services do not replace backups, and why SaaS data requires independent 
protection.","title":"Why you need to backup your SaaS","type":"docs"},{"content":" Why you need to backup your SaaS # Modern SaaS platforms such as Google Drive, Dropbox, Notion, and others are highly available and reliable. They are designed to keep services online and data accessible. However, availability is not the same thing as data protection.\nThis page explains why relying solely on SaaS providers is not enough, and why independent backups are still necessary.\nAvailability is not protection # SaaS providers focus on keeping their platforms running:\nServers stay online Data is replicated Services remain accessible This protects the service, not your data. If data is deleted, corrupted, or altered, the platform will reliably synchronize that change everywhere.\nThe shared responsibility model # SaaS providers operate under a shared responsibility model.\nThe provider secures the infrastructure and platform You are responsible for your data This means providers generally do not protect you from:\nAccidental deletion Malicious or unintended changes Account compromise Ransomware or sync‑based corruption Compliance or long‑term retention needs If data is removed or modified legitimately from the provider’s point of view, it is often considered permanent.\nSync is not a backup # Most SaaS platforms are built around synchronization. Sync ensures that:\nChanges propagate instantly All devices see the same state Deletions are mirrored everywhere This is useful for collaboration, but dangerous for recovery. 
Mistakes, corruption, or malicious changes spread just as reliably as valid ones.\nVersion history has limits # Some SaaS platforms provide version history or trash retention.\nThese features help with short‑term mistakes, but they:\nAre time‑limited Depend on the same account and infrastructure Cannot guarantee long‑term recovery May not meet compliance or audit requirements Version history helps with recent errors, not with long‑term resilience.\nWhy independent backups matter # An independent backup creates a clean separation between:\nYour live SaaS data Your recovery data This separation ensures that:\nAccount issues do not affect backups Provider outages do not block recovery Data can be restored from any point in time Retention policies are under your control Independent backups ensure that your data remains recoverable, regardless of what happens inside the SaaS platform itself.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/explanations/why-should-i-backup-my-saas/","section":"Docs","summary":"Understand why cloud services do not replace backups, and why SaaS data requires independent protection.","title":"Why you need to backup your SaaS","type":"docs"},{"content":" References # This section provides comprehensive technical reference documentation for Plakar\u0026rsquo;s commands, configurations, file formats, and integrations. These pages are designed for quick lookup and detailed specification rather than learning or conceptual understanding.\nIf you\u0026rsquo;re looking for learning materials or conceptual explanations, see the Explanations section. For step-by-step instructions, see the Guides section.\nCommand line syntax How Plakar commands are structured, why flag order matters, and how to get help from the CLI.\nCommands Reference for all Plakar commands. Browse detailed documentation for each command, including usage, options, and examples. 
Access help online or directly from your terminal.\n","date":"18 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/","section":"Docs","summary":"Reference docs for Plakar","title":"References","type":"docs"},{"content":" References # This section provides comprehensive technical reference documentation for Plakar\u0026rsquo;s commands, configurations, file formats, and integrations. These pages are designed for quick lookup and detailed specification rather than learning or conceptual understanding.\nIf you\u0026rsquo;re looking for learning materials or conceptual explanations, see the Explanations section. For step-by-step instructions, see the Guides section.\nPlakar Ptar Command reference for creating and accessing Ptar archives: syntax, options, and examples for plakar ptar and related commands.\nCommand line syntax How Plakar commands are structured, why flag order matters, and how to get help from the CLI.\nGo Kloset SDK Go SDK reference for building Plakar integrations.\nCommands Reference for all Plakar commands. Browse detailed documentation for each command, including usage, options, and examples. Access help online or directly from your terminal.\n","date":"17 March 2026","externalUrl":null,"permalink":"/docs/main/references/","section":"Docs","summary":"Reference docs for Plakar","title":"References","type":"docs"},{"content":" References # This section provides comprehensive technical reference documentation for Plakar\u0026rsquo;s commands, configurations, file formats, and integrations. These pages are designed for quick lookup and detailed specification rather than learning or conceptual understanding.\nIf you\u0026rsquo;re looking for learning materials or conceptual explanations, see the Explanations section. 
For step-by-step instructions, see the Guides section.\nPlakar Ptar Command reference for creating and accessing Ptar archives: syntax, options, and examples for plakar ptar and related commands.\nCommand line syntax How Plakar commands are structured, why flag order matters, and how to get help from the CLI.\nCommands Reference for all Plakar commands. Browse detailed documentation for each command, including usage, options, and examples. Access help online or directly from your terminal.\n","date":"17 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/","section":"Docs","summary":"Reference docs for Plakar","title":"References","type":"docs"},{"content":" References # This section provides comprehensive technical reference documentation for Plakar\u0026rsquo;s commands, configurations, file formats, and integrations. These pages are designed for quick lookup and detailed specification rather than learning or conceptual understanding.\nIf you\u0026rsquo;re looking for learning materials or conceptual explanations, see the Explanations section. For step-by-step instructions, see the Guides section.\nPlakar Ptar Command reference for creating and accessing Ptar archives: syntax, options, and examples for plakar ptar and related commands.\nCommand line syntax How Plakar commands are structured, why flag order matters, and how to get help from the CLI.\nGo Kloset SDK Go SDK reference for building Plakar integrations.\nCommands Reference for all Plakar commands. Browse detailed documentation for each command, including usage, options, and examples. 
Access help online or directly from your terminal.\n","date":"17 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/","section":"Docs","summary":"Reference docs for Plakar","title":"References","type":"docs"},{"content":" Excluding files from a backup # This guide shows how to exclude files and directories from a backup using ignore patterns.\nWhy you\u0026rsquo;d need to exclude files from a backup # When backing up a source directory, not all files are worth preserving. Some are large and easily regenerated (build artifacts, dependency directories like node_modules or vendor), some are temporary (cache files, lock files, log files), and some are sensitive and should not be stored in a backup repository (secrets, local environment files).\nExcluding these from your backups reduces storage usage, speeds up backup and restore operations, and keeps your snapshots focused on only the important files.\nThe plakar backup command supports the -ignore and -ignore-file options to exclude files from a backup. 
These options use patterns with a syntax similar to .gitignore files.\nExamples # For the examples below, we assume the following directory structure in /var/files/demo:\n/var/files/demo ├── .cache │ └── index.db ├── .config ├── .env ├── .env.local ├── .git │ ├── config │ └── hooks ├── build │ ├── app.bin │ └── app.o ├── config │ ├── config.local.yaml │ └── config.yaml ├── Documents │ ├── Invoices │ │ ├── invoice1.pdf │ │ └── invoice2.pdf │ └── Reports │ ├── report1.docx │ └── report2.docx ├── logs │ ├── app.log │ └── error.log ├── node_modules │ ├── module1.js │ └── module2.js ├── Pictures │ ├── Family │ │ └── photo1.jpg │ └── Vacation │ └── photo2.png ├── src │ ├── main.go │ ├── secret.key │ └── utils.go ├── tmp │ ├── cache.db │ └── tempfile.tmp └── vendor └── github.com ├── lib1.go └── lib2.go And we assume the backup command is:\n$ plakar at /var/backups backup -ignore-file ./excludes.txt /var/files You can use -ignore multiple times with different patterns, or use -ignore-file with a file containing one pattern per line. 
The result is the same.\nIgnore the /var/files/demo/vendor directory only: # /var/files/demo/vendor Ignore the node_modules directory, wherever it is in the tree: # node_modules In this case, both /var/files/demo/node_modules and /var/files/demo/src/node_modules would be ignored.\nIgnore the file .git/config, wherever it is in the tree: # **/.git/config Here, the double asterisk ** is required.\nWhen a path pattern contains multiple parts, it is evaluated relative to the root directory /.\nExclude all files located in a tmp directory anywhere in the tree, except for cache.db: # **/tmp/* !**/tmp/cache.db Exclude everything, except .pdf and .docx files: # * !**/*.pdf !**/*.docx ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/main/guides/excluding-files-from-a-backup/","section":"Docs","summary":"Learn how to exclude files from a backup in Plakar","title":"Excluding files from a backup","type":"docs"},{"content":" Excluding files from a backup # This guide shows how to exclude files and directories from a backup using ignore patterns.\nWhy you\u0026rsquo;d need to exclude files from a backup # When backing up a source directory, not all files are worth preserving. Some are large and easily regenerated (build artifacts, dependency directories like node_modules or vendor), some are temporary (cache files, lock files, log files), and some are sensitive and should not be stored in a backup repository (secrets, local environment files).\nExcluding these from your backups reduces storage usage, speeds up backup and restore operations, and keeps your snapshots focused on only the important files.\nThe plakar backup command supports the -ignore and -ignore-file options to exclude files from a backup. 
These options use patterns with a syntax similar to .gitignore files.\nExamples # For the examples below, we assume the following directory structure in /var/files/demo:\n/var/files/demo ├── .cache │ └── index.db ├── .config ├── .env ├── .env.local ├── .git │ ├── config │ └── hooks ├── build │ ├── app.bin │ └── app.o ├── config │ ├── config.local.yaml │ └── config.yaml ├── Documents │ ├── Invoices │ │ ├── invoice1.pdf │ │ └── invoice2.pdf │ └── Reports │ ├── report1.docx │ └── report2.docx ├── logs │ ├── app.log │ └── error.log ├── node_modules │ ├── module1.js │ └── module2.js ├── Pictures │ ├── Family │ │ └── photo1.jpg │ └── Vacation │ └── photo2.png ├── src │ ├── main.go │ ├── secret.key │ └── utils.go ├── tmp │ ├── cache.db │ └── tempfile.tmp └── vendor └── github.com ├── lib1.go └── lib2.go And we assume the backup command is:\n$ plakar at /var/backups backup -ignore-file ./excludes.txt /var/files You can use -ignore multiple times with different patterns, or use -ignore-file with a file containing one pattern per line. 
The result is the same.\nIgnore the /var/files/demo/vendor directory only: # /var/files/demo/vendor Ignore the node_modules directory, wherever it is in the tree: # node_modules In this case, both /var/files/demo/node_modules and /var/files/demo/src/node_modules would be ignored.\nIgnore the file .git/config, wherever it is in the tree: # **/.git/config Here, the double asterisk ** is required.\nWhen a path pattern contains multiple parts, it is evaluated relative to the root directory /.\nExclude all files located in a tmp directory anywhere in the tree, except for cache.db: # **/tmp/* !**/tmp/cache.db Exclude everything, except .pdf and .docx files: # * !**/*.pdf !**/*.docx ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/guides/excluding-files-from-a-backup/","section":"Docs","summary":"Learn how to exclude files from a backup in Plakar","title":"Excluding files from a backup","type":"docs"},{"content":" Excluding files from a backup # This guide shows how to exclude files and directories from a backup using ignore patterns.\nWhy you\u0026rsquo;d need to exclude files from a backup # When backing up a source directory, not all files are worth preserving. Some are large and easily regenerated (build artifacts, dependency directories like node_modules or vendor), some are temporary (cache files, lock files, log files), and some are sensitive and should not be stored in a backup repository (secrets, local environment files).\nExcluding these from your backups reduces storage usage, speeds up backup and restore operations, and keeps your snapshots focused on only the important files.\nThe plakar backup command supports the -ignore and -ignore-file options to exclude files from a backup. 
These options use patterns with a syntax similar to .gitignore files.\nExamples # For the examples below, we assume the following directory structure in /var/files/demo:\n/var/files/demo ├── .cache │ └── index.db ├── .config ├── .env ├── .env.local ├── .git │ ├── config │ └── hooks ├── build │ ├── app.bin │ └── app.o ├── config │ ├── config.local.yaml │ └── config.yaml ├── Documents │ ├── Invoices │ │ ├── invoice1.pdf │ │ └── invoice2.pdf │ └── Reports │ ├── report1.docx │ └── report2.docx ├── logs │ ├── app.log │ └── error.log ├── node_modules │ ├── module1.js │ └── module2.js ├── Pictures │ ├── Family │ │ └── photo1.jpg │ └── Vacation │ └── photo2.png ├── src │ ├── main.go │ ├── secret.key │ └── utils.go ├── tmp │ ├── cache.db │ └── tempfile.tmp └── vendor └── github.com ├── lib1.go └── lib2.go And we assume the backup command is:\n$ plakar at /var/backups backup -ignore-file ./excludes.txt /var/files You can use -ignore multiple times with different patterns, or use -ignore-file with a file containing one pattern per line. 
The result is the same.\nIgnore the /var/files/demo/vendor directory only: # /var/files/demo/vendor Ignore the node_modules directory, wherever it is in the tree: # node_modules In this case, both /var/files/demo/node_modules and /var/files/demo/src/node_modules would be ignored.\nIgnore the file .git/config, wherever it is in the tree: # **/.git/config Here, the double asterisk ** is required.\nWhen a path pattern contains multiple parts, it is evaluated relative to the root directory /.\nExclude all files located in a tmp directory anywhere in the tree, except for cache.db: # **/tmp/* !**/tmp/cache.db Exclude everything, except .pdf and .docx files: # * !**/*.pdf !**/*.docx ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/excluding-files-from-a-backup/","section":"Docs","summary":"Learn how to exclude files from a backup in Plakar","title":"Excluding files from a backup","type":"docs"},{"content":" Excluding files from a backup # This guide shows how to exclude files and directories from a backup using ignore patterns.\nWhy you\u0026rsquo;d need to exclude files from a backup # When backing up a source directory, not all files are worth preserving. Some are large and easily regenerated (build artifacts, dependency directories like node_modules or vendor), some are temporary (cache files, lock files, log files), and some are sensitive and should not be stored in a backup repository (secrets, local environment files).\nExcluding these from your backups reduces storage usage, speeds up backup and restore operations, and keeps your snapshots focused on only the important files.\nThe plakar backup command supports the -ignore and -ignore-file options to exclude files from a backup. 
These options use patterns with a syntax similar to .gitignore files.\nExamples # For the examples below, we assume the following directory structure in /var/files/demo:\n/var/files/demo ├── .cache │ └── index.db ├── .config ├── .env ├── .env.local ├── .git │ ├── config │ └── hooks ├── build │ ├── app.bin │ └── app.o ├── config │ ├── config.local.yaml │ └── config.yaml ├── Documents │ ├── Invoices │ │ ├── invoice1.pdf │ │ └── invoice2.pdf │ └── Reports │ ├── report1.docx │ └── report2.docx ├── logs │ ├── app.log │ └── error.log ├── node_modules │ ├── module1.js │ └── module2.js ├── Pictures │ ├── Family │ │ └── photo1.jpg │ └── Vacation │ └── photo2.png ├── src │ ├── main.go │ ├── secret.key │ └── utils.go ├── tmp │ ├── cache.db │ └── tempfile.tmp └── vendor └── github.com ├── lib1.go └── lib2.go And we assume the backup command is:\n$ plakar at /var/backups backup -ignore-file ./excludes.txt /var/files You can use -ignore multiple times with different patterns, or use -ignore-file with a file containing one pattern per line. 
The result is the same.\nIgnore the /var/files/demo/vendor directory only: # /var/files/demo/vendor Ignore the node_modules directory, wherever it is in the tree: # node_modules In this case, both /var/files/demo/node_modules and /var/files/demo/src/node_modules would be ignored.\nIgnore the file .git/config, wherever it is in the tree: # **/.git/config Here, the double asterisk ** is required.\nWhen a path pattern contains multiple parts, it is evaluated relative to the root directory /.\nExclude all files located in a tmp directory anywhere in the tree, except for cache.db: # **/tmp/* !**/tmp/cache.db Exclude everything, except .pdf and .docx files: # * !**/*.pdf !**/*.docx ","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/excluding-files-from-a-backup/","section":"Docs","summary":"Learn how to exclude files from a backup in Plakar","title":"Excluding files from a backup","type":"docs"},{"content":" Backup non-filesystem data # Modern infrastructures are not limited to files stored on traditional filesystems. Your data may reside in various services, databases, or cloud storage solutions.\nIn the first two parts of this quickstart, we created a Kloset Store and performed a backup of local filesystem data, and then synchronized that Kloset Store to a second location to improve durability.\nIn this guide, we will create a backup of an S3 bucket using Plakar. The same logic applies to any other data source supported by Plakar through its various connectors.\nRequirements # After following the previous parts of this quickstart, you should have:\nPlakar installed on your system (see the installation guide). A Kloset Store on your local filesystem at $HOME/backups. An S3-compatible storage location configured in your Plakar configuration file under the name s3-backups (see Part 2 of this quickstart). Initialize the S3 bucket with some data # Before we can back up an S3 bucket, we need to have one with some data in it. 
If you already have an S3 bucket you want to back up, you can skip this step.\nIf, instead, you followed the previous part of this quickstart and set up a local MinIO instance, you can use it to create a test bucket.\nOpen your browser and navigate to http://localhost:9001. Log in with the default credentials minioadmin / minioadmin.\nClick on the \u0026ldquo;Create bucket\u0026rdquo; button, and enter mydata as the bucket name.\nThen, click on the \u0026ldquo;Upload\u0026rdquo; button, and upload a few files of your choice to the bucket.\nConfigure the S3 source in plakar # Similarly to how we configured the S3 store in Part 2 of this quickstart, we need to let Plakar know about the S3 source we want to back up.\nRun the following command to create the new source:\n$ plakar source add mydata \\ location=s3://localhost:9000/mydata \\ access_key=minioadmin \\ secret_access_key=minioadmin \\ use_tls=false This command creates a new source named mydata that points to the mydata bucket on the MinIO server running at localhost:9000. It uses the access key and secret key provided above. The use_tls=false option is specified because we are connecting to a local server without TLS.\nuse_tls should be omitted or set to true when connecting to production S3-compatible services that use TLS.\nCreate the backup # To create a backup of the S3 bucket to the local Kloset Store at $HOME/backups, run the following command:\nplakar at $HOME/backups backup \u0026#34;@mydata\u0026#34; As you can see, the alias @mydata is used to reference the source previously configured.\nTo verify that the backup was created successfully, you can list the snapshots in the local Kloset Store:\n$ plakar at $HOME/backups ls 2025-12-16T12:55:30Z 842de8b1 496 B 0s / # the backup of the S3 bucket we just created 2025-12-15T21:09:32Z 772fba5f 2.9 MiB 0s /private/etc # the previous backup, from Part 1 Note that in this example, we created the backup to a store hosted on the local filesystem. 
It is perfectly possible to back up S3 data directly to another S3 location, or any other supported store, using plakar at \u0026quot;@store-name\u0026quot; backup \u0026quot;@source-name\u0026quot;.\nRestore the backup # It is also possible to restore a snapshot directly to an S3 location.\nTo do so, first configure a new destination:\n$ plakar destination add mydata \\ location=s3://localhost:9000/mydata \\ access_key=minioadmin \\ secret_access_key=minioadmin \\ use_tls=false use_tls should be omitted or set to true when connecting to production S3-compatible services that use TLS.\nAnd then, restore the snapshot to that destination:\n$ plakar at $HOME/backups restore -to \u0026#34;@mydata\u0026#34; 842de8b1 repository passphrase: info: 842de8b1: OK ✓ / info: 842de8b1: OK ✓ /Makefile info: restore: restoration of 842de8b1:/ at @mydata completed successfully For the restore command, we used the alias again with @mydata which references the S3 destination we just configured.\nCongratulations! # You have successfully created a backup of an S3 bucket using Plakar, and restored it back to the S3 location.\nThis guide demonstrated how to back up non-filesystem data using Plakar. The same principles apply to any other data source supported by Plakar through its various connectors.\nNext steps # There is plenty more to discover about Plakar. Here are our suggestions on what to try next:\nLearn more about the core concepts behind Plakar. Create a schedule for your backups Discover more about the Plakar command line syntax ","date":"11 March 2026","externalUrl":null,"permalink":"/docs/main/quickstart/backup-non-filesystem-data/","section":"Docs","summary":"Create a backup for your non-filesystem data. 
In this guide, we will back up an S3 bucket, but this logic applies to any connector supported by Plakar.","title":"Backup non-filesystem data","type":"docs"},{"content":" Backup non-filesystem data # Modern infrastructures are not limited to files stored on traditional filesystems. Your data may reside in various services, databases, or cloud storage solutions.\nIn the first two parts of this quickstart, we created a Kloset Store and performed a backup of local filesystem data, and then synchronized that Kloset Store to a second location to improve durability.\nIn this guide, we will create a backup of an S3 bucket using Plakar. The same logic applies to any other data source supported by Plakar through its various connectors.\nRequirements # After following the previous parts of this quickstart, you should have:\nPlakar installed on your system (see the installation guide). A Kloset Store on your local filesystem at $HOME/backups. An S3-compatible storage location configured in your Plakar configuration file under the name s3-backups (see Part 2 of this quickstart). Initialize the S3 bucket with some data # Before we can back up an S3 bucket, we need to have one with some data in it. If you already have an S3 bucket you want to back up, you can skip this step.\nIf, instead, you followed the previous part of this quickstart and set up a local MinIO instance, you can use it to create a test bucket.\nOpen your browser and navigate to http://localhost:9001. 
Log in with the default credentials minioadmin / minioadmin.\nClick on the \u0026ldquo;Create bucket\u0026rdquo; button, and enter mydata as the bucket name.\nThen, click on the \u0026ldquo;Upload\u0026rdquo; button, and upload a few files of your choice to the bucket.\nConfigure the S3 source in plakar # Similarly to how we configured the S3 store in Part 2 of this quickstart, we need to let Plakar know about the S3 source we want to back up.\nRun the following command to create the new source:\n$ plakar source add mydata \\ location=s3://localhost:9000/mydata \\ access_key=minioadmin \\ secret_access_key=minioadmin \\ use_tls=false This command creates a new source named mydata that points to the mydata bucket on the MinIO server running at localhost:9000. It uses the access key and secret key provided above. The use_tls=false option is specified because we are connecting to a local server without TLS.\nuse_tls should be omitted or set to true when connecting to production S3-compatible services that use TLS.\nCreate the backup # To create a backup of the S3 bucket to the local Kloset Store at $HOME/backups, run the following command:\nplakar at $HOME/backups backup \u0026#34;@mydata\u0026#34; As you can see, the alias @mydata is used to reference the source previously configured.\nTo verify that the backup was created successfully, you can list the snapshots in the local Kloset Store:\n$ plakar at $HOME/backups ls 2025-12-16T12:55:30Z 842de8b1 496 B 0s / # the backup of the S3 bucket we just created 2025-12-15T21:09:32Z 772fba5f 2.9 MiB 0s /private/etc # the previous backup, from Part 1 Note that in this example, we created the backup to a store hosted on the local filesystem. 
It is perfectly possible to back up S3 data directly to another S3 location, or any other supported store, using plakar at \u0026quot;@store-name\u0026quot; backup \u0026quot;@source-name\u0026quot;.\nRestore the backup # It is also possible to restore a snapshot directly to an S3 location.\nTo do so, first configure a new destination:\n$ plakar destination add mydata \\ location=s3://localhost:9000/mydata \\ access_key=minioadmin \\ secret_access_key=minioadmin \\ use_tls=false use_tls should be omitted or set to true when connecting to production S3-compatible services that use TLS.\nAnd then, restore the snapshot to that destination:\n$ plakar at $HOME/backups restore -to \u0026#34;@mydata\u0026#34; 842de8b1 repository passphrase: info: 842de8b1: OK ✓ / info: 842de8b1: OK ✓ /Makefile info: restore: restoration of 842de8b1:/ at @mydata completed successfully For the restore command, we used the alias @mydata again, which references the S3 destination we just configured.\nCongratulations! # You have successfully created a backup of an S3 bucket using Plakar, and restored it back to the S3 location.\nThis guide demonstrated how to back up non-filesystem data using Plakar. The same principles apply to any other data source supported by Plakar through its various connectors.\nNext steps # There is plenty more to discover about Plakar. Here are our suggestions on what to try next:\nLearn more about the core concepts behind Plakar. Create a schedule for your backups Discover more about the Plakar command line syntax ","date":"11 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/quickstart/backup-non-filesystem-data/","section":"Docs","summary":"Create a backup for your non-filesystem data.
In this guide, we will back up an S3 bucket but this logic applies to any connector supported by plakar.","title":"Backup non-filesystem data","type":"docs"},{"content":" Backup non-filesystem data # Modern infrastructures are not limited to files stored on traditional filesystems. Your data may reside in various services, databases, or cloud storage solutions.\nIn the first two parts of this quickstart, we created a Kloset Store and performed a backup of local filesystem data, and then synchronized that Kloset Store to a second location to improve durability.\nIn this guide, we will create a backup of an S3 bucket using Plakar. The same logic applies to any other data source supported by Plakar through its various connectors.\nRequirements # After following the previous parts of this quickstart, you should have:\nPlakar installed on your system (see the installation guide). A Kloset Store on your local filesystem at $HOME/backups. An S3-compatible storage location configured in your Plakar configuration file under the name s3-backups (see Part 2 of this quickstart). Initialize the S3 bucket with some data # Before we can back up an S3 bucket, we need to have one with some data in it. If you already have an S3 bucket you want to back up, you can skip this step.\nIf, instead, you followed the previous part of this quickstart and set up a local MinIO instance, you can use it to create a test bucket.\nOpen your browser and navigate to http://localhost:9001.
Log in with the default credentials minioadmin / minioadmin.\nClick on the \u0026ldquo;Create bucket\u0026rdquo; button, and enter mydata as the bucket name.\nThen, click on the \u0026ldquo;Upload\u0026rdquo; button, and upload a few files of your choice to the bucket.\nConfigure the S3 source in plakar # Similarly to how we configured the S3 store in Part 2 of this quickstart, we need to let Plakar know about the S3 source we want to back up.\nRun the following command to create the new source:\n$ plakar source add mydata \\ location=s3://localhost:9000/mydata \\ access_key=minioadmin \\ secret_access_key=minioadmin \\ use_tls=false This command creates a new source named mydata that points to the mydata bucket on the MinIO server running at localhost:9000. It uses the access key and secret key provided above. The use_tls=false option is specified because we are connecting to a local server without TLS.\nuse_tls should be omitted or set to true when connecting to production S3-compatible services that use TLS.\nCreate the backup # To create a backup of the S3 bucket to the local Kloset Store at $HOME/backups, run the following command:\nplakar at $HOME/backups backup \u0026#34;@mydata\u0026#34; As you can see, the alias @mydata is used to reference the source previously configured.\nTo verify that the backup was created successfully, you can list the snapshots in the local Kloset Store:\n$ plakar at $HOME/backups ls 2025-12-16T12:55:30Z 842de8b1 496 B 0s / # the backup of the S3 bucket we just created 2025-12-15T21:09:32Z 772fba5f 2.9 MiB 0s /private/etc # the previous backup, from Part 1 Note that in this example, we created the backup to a store hosted on the local filesystem. 
It is perfectly possible to back up S3 data directly to another S3 location, or any other supported store, using plakar at \u0026quot;@store-name\u0026quot; backup \u0026quot;@source-name\u0026quot;.\nRestore the backup # It is also possible to restore a snapshot directly to an S3 location.\nTo do so, first configure a new destination:\n$ plakar destination add mydata \\ location=s3://localhost:9000/mydata \\ access_key=minioadmin \\ secret_access_key=minioadmin \\ use_tls=false use_tls should be omitted or set to true when connecting to production S3-compatible services that use TLS.\nAnd then, restore the snapshot to that destination:\n$ plakar at $HOME/backups restore -to \u0026#34;@mydata\u0026#34; 842de8b1 repository passphrase: info: 842de8b1: OK ✓ / info: 842de8b1: OK ✓ /Makefile info: restore: restoration of 842de8b1:/ at @mydata completed successfully For the restore command, we used the alias @mydata again, which references the S3 destination we just configured.\nCongratulations! # You have successfully created a backup of an S3 bucket using Plakar, and restored it back to the S3 location.\nThis guide demonstrated how to back up non-filesystem data using Plakar. The same principles apply to any other data source supported by Plakar through its various connectors.\nNext steps # There is plenty more to discover about Plakar. Here are our suggestions on what to try next:\nLearn more about the core concepts behind Plakar. Create a schedule for your backups Discover more about the Plakar command line syntax ","date":"11 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/quickstart/backup-non-filesystem-data/","section":"Docs","summary":"Create a backup for your non-filesystem data.
In this guide, we will back up an S3 bucket but this logic applies to any connector supported by plakar.","title":"Backup non-filesystem data","type":"docs"},{"content":" Community # The Plakar community drives the evolution of a new open-source standard for data protection. This documentation page gives you a practical overview of how to join, participate, and stay informed.\nLooking for the public facing community page with visuals and calls to action? Visit the dedicated portal: Community Portal.\nQuick Access # Main Community Page: community Code of Conduct: Code of Conduct Contributing Guide: Contributing Guide Contact \u0026amp; Social Channels # Choose the channel that matches your intent:\nDiscord (real-time chat \u0026amp; open work sessions) Reddit (asynchronous discussions \u0026amp; feedback) X (announcements \u0026amp; progress) Bluesky (announcements \u0026amp; progress) LinkedIn (announcements \u0026amp; progress) GitHub (issues, code, roadmap) Open Collaboration Spaces # Daily Briefings \u0026amp; Hackrooms: Run live in Discord voice channels. Weekly Community Calls: Agenda posted in Discord announcements. Early Preview Builds: Shared via GitHub release candidates and Discord #releases channel. Code of Conduct \u0026amp; Governance # Respectful, inclusive collaboration is required in all spaces.\nCode of Conduct: Code of Conduct Reporting: conduct@plakar.io (confidential) Contributing Guide: Contributing Guide Key principles (summary):\nBe respectful and constructive. Focus on technical merit. No harassment or discrimination. Use clear, transparent communication. Read the full documents on GitHub for enforcement scope and process.\nContributing to Plakar # Thank you for your interest in contributing to Plakar! We welcome contributions of all kinds, including new features, bug fixes, documentation improvements, and more. 
Please take a moment to read through this guide to understand our contribution process and how to get started.\nHow to Contribute # 1. Reporting Bugs and Issues\nIf you\u0026rsquo;ve found a bug or have a suggestion for improvement, please open an issue on GitHub. Include as much detail as possible, such as:\nA description of the problem or suggestion. Steps to reproduce the issue (if applicable). Relevant logs or error messages. Any other context that would help us understand the problem. 2. Suggesting Features\nWe are always looking for ways to make Plakar better! If you have a feature request, feel free to open an issue with a detailed explanation of the proposed feature, its use cases, and potential benefits.\n3. Submitting Changes\nBefore starting any work, it\u0026rsquo;s a good idea to discuss your idea with the maintainers by opening an issue or commenting on an existing one. This helps ensure your work aligns with project goals and saves time for everyone involved.\nSteps to Submit Changes:\nFork the Repository\nCreate a personal fork of the repository on GitHub to work in. Create a Feature Branch\nUse descriptive names for your branches, such as fix-bug-issue123 or feature-new-backup-strategy. Write Clear, Concise Commit Messages\nEach commit message should clearly describe what change was made and why. Use the present tense, e.g., \u0026ldquo;Fix issue with backup scheduler.\u0026rdquo; Follow the Coding Style\nAdhere to the project’s code style and formatting.\nEnsure your code is clear, maintainable, and well-documented. Run Tests and Linters\nMake sure your code passes all tests and follows the required linting rules.\nIf applicable, add new tests to verify your changes. Submit a Pull Request (PR)\nOnce your changes are complete and tested, open a PR against the main branch. Provide a detailed description of your changes, referencing any related issues. 4. Code Review Process\nAll PRs will be reviewed by project maintainers. 
Feedback may be provided, and changes might be requested. Please be open to discussions and willing to make adjustments based on the review.\nRespond Promptly: Address review comments promptly to keep the PR moving forward. Stay Technical: Reviews will be focused on technical merit and alignment with project goals. 5. Documentation Contributions\nGood documentation is crucial for any project! Contributions to documentation are highly valued. You can:\nUpdate or improve existing documentation. Add new documentation for features, setup instructions, or developer guides. Ensure that all code contributions include relevant documentation updates. 6. Code Cleanup and Maintenance\nCode cleanup, refactoring, and removing unused code are essential contributions that help keep the project healthy. Do not hesitate to submit PRs that address these issues even if they do not add new features.\n7. Licensing and Dependency Guidelines\nAll contributions must comply with the project\u0026rsquo;s license.\nBe cautious when introducing new dependencies. Avoid dependencies with viral licensing (e.g., GPL) unless discussed and approved by the maintainers.\nEnsure any new dependencies are well-maintained and have a compatible license.\n8. Contributor Code of Conduct\nAll contributors must follow the project\u0026rsquo;s Code of Conduct to ensure a welcoming and respectful environment for everyone.\nGetting Help\nIf you need help or have questions, feel free to reach out by:\nOpening an issue on GitHub. Asking on our mailing list or chat channels. Reaching out to the maintainers directly. We appreciate your time and effort in making Plakar better. Happy coding!\nWhat Belongs Where? # Goal: Recommended Place: Ask a quick question Discord Propose a feature GitHub Issue Report a bug GitHub Issue Share community content Discord #general / Reddit Follow announcements X / Bluesky / LinkedIn Learn engagement rules Code of Conduct (GitHub) Need Help? 
# If unsure where to start:\nJoin Discord and say hello. Open an issue labeled question. Email: hello@plakar.io (general) or conduct@plakar.io (conduct related). Thank you for helping grow the Plakar ecosystem.\n","date":"20 March 2026","externalUrl":null,"permalink":"/docs/main/community/","section":"Docs","summary":"How to engage with the Plakar community: where to talk, collaborate, follow updates, and read the rules.","title":"Community","type":"docs"},{"content":" Community # The Plakar community drives the evolution of a new open-source standard for data protection. This documentation page gives you a practical overview of how to join, participate, and stay informed.\nLooking for the public facing community page with visuals and calls to action? Visit the dedicated portal: Community Portal.\nQuick Access # Main Community Page: community Code of Conduct: Code of Conduct Contributing Guide: Contributing Guide Contact \u0026amp; Social Channels # Choose the channel that matches your intent:\nDiscord (real-time chat \u0026amp; open work sessions) Reddit (asynchronous discussions \u0026amp; feedback) X (announcements \u0026amp; progress) Bluesky (announcements \u0026amp; progress) LinkedIn (announcements \u0026amp; progress) GitHub (issues, code, roadmap) Open Collaboration Spaces # Daily Briefings \u0026amp; Hackrooms: Run live in Discord voice channels. Weekly Community Calls: Agenda posted in Discord announcements. Early Preview Builds: Shared via GitHub release candidates and Discord #releases channel. Code of Conduct \u0026amp; Governance # Respectful, inclusive collaboration is required in all spaces.\nCode of Conduct: Code of Conduct Reporting: conduct@plakar.io (confidential) Contributing Guide: Contributing Guide Key principles (summary):\nBe respectful and constructive. Focus on technical merit. No harassment or discrimination. Use clear, transparent communication. 
Read the full documents on GitHub for enforcement scope and process.\nContributing to Plakar # Thank you for your interest in contributing to Plakar! We welcome contributions of all kinds, including new features, bug fixes, documentation improvements, and more. Please take a moment to read through this guide to understand our contribution process and how to get started.\nHow to Contribute # 1. Reporting Bugs and Issues\nIf you\u0026rsquo;ve found a bug or have a suggestion for improvement, please open an issue on GitHub. Include as much detail as possible, such as:\nA description of the problem or suggestion. Steps to reproduce the issue (if applicable). Relevant logs or error messages. Any other context that would help us understand the problem. 2. Suggesting Features\nWe are always looking for ways to make Plakar better! If you have a feature request, feel free to open an issue with a detailed explanation of the proposed feature, its use cases, and potential benefits.\n3. Submitting Changes\nBefore starting any work, it\u0026rsquo;s a good idea to discuss your idea with the maintainers by opening an issue or commenting on an existing one. This helps ensure your work aligns with project goals and saves time for everyone involved.\nSteps to Submit Changes:\nFork the Repository\nCreate a personal fork of the repository on GitHub to work in. Create a Feature Branch\nUse descriptive names for your branches, such as fix-bug-issue123 or feature-new-backup-strategy. Write Clear, Concise Commit Messages\nEach commit message should clearly describe what change was made and why. Use the present tense, e.g., \u0026ldquo;Fix issue with backup scheduler.\u0026rdquo; Follow the Coding Style\nAdhere to the project’s code style and formatting.\nEnsure your code is clear, maintainable, and well-documented. Run Tests and Linters\nMake sure your code passes all tests and follows the required linting rules.\nIf applicable, add new tests to verify your changes. 
Submit a Pull Request (PR)\nOnce your changes are complete and tested, open a PR against the main branch. Provide a detailed description of your changes, referencing any related issues. 4. Code Review Process\nAll PRs will be reviewed by project maintainers. Feedback may be provided, and changes might be requested. Please be open to discussions and willing to make adjustments based on the review.\nRespond Promptly: Address review comments promptly to keep the PR moving forward. Stay Technical: Reviews will be focused on technical merit and alignment with project goals. 5. Documentation Contributions\nGood documentation is crucial for any project! Contributions to documentation are highly valued. You can:\nUpdate or improve existing documentation. Add new documentation for features, setup instructions, or developer guides. Ensure that all code contributions include relevant documentation updates. 6. Code Cleanup and Maintenance\nCode cleanup, refactoring, and removing unused code are essential contributions that help keep the project healthy. Do not hesitate to submit PRs that address these issues even if they do not add new features.\n7. Licensing and Dependency Guidelines\nAll contributions must comply with the project\u0026rsquo;s license.\nBe cautious when introducing new dependencies. Avoid dependencies with viral licensing (e.g., GPL) unless discussed and approved by the maintainers.\nEnsure any new dependencies are well-maintained and have a compatible license.\n8. Contributor Code of Conduct\nAll contributors must follow the project\u0026rsquo;s Code of Conduct to ensure a welcoming and respectful environment for everyone.\nGetting Help\nIf you need help or have questions, feel free to reach out by:\nOpening an issue on GitHub. Asking on our mailing list or chat channels. Reaching out to the maintainers directly. We appreciate your time and effort in making Plakar better. Happy coding!\nWhat Belongs Where? 
# Goal: Recommended Place: Ask a quick question Discord Propose a feature GitHub Issue Report a bug GitHub Issue Share community content Discord #general / Reddit Follow announcements X / Bluesky / LinkedIn Learn engagement rules Code of Conduct (GitHub) Need Help? # If unsure where to start:\nJoin Discord and say hello. Open an issue labeled question. Email: hello@plakar.io (general) or conduct@plakar.io (conduct related). Thank you for helping grow the Plakar ecosystem.\n","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/community/","section":"Docs","summary":"How to engage with the Plakar community: where to talk, collaborate, follow updates, and read the rules.","title":"Community","type":"docs"},{"content":" Community # The Plakar community drives the evolution of a new open-source standard for data protection. This documentation page gives you a practical overview of how to join, participate, and stay informed.\nLooking for the public facing community page with visuals and calls to action? Visit the dedicated portal: Community Portal.\nQuick Access # Main Community Page: community Code of Conduct: Code of Conduct Contributing Guide: Contributing Guide Contact \u0026amp; Social Channels # Choose the channel that matches your intent:\nDiscord (real-time chat \u0026amp; open work sessions) Reddit (asynchronous discussions \u0026amp; feedback) X (announcements \u0026amp; progress) Bluesky (announcements \u0026amp; progress) LinkedIn (announcements \u0026amp; progress) GitHub (issues, code, roadmap) Open Collaboration Spaces # Daily Briefings \u0026amp; Hackrooms: Run live in Discord voice channels. Weekly Community Calls: Agenda posted in Discord announcements. Early Preview Builds: Shared via GitHub release candidates and Discord #releases channel. 
Code of Conduct \u0026amp; Governance # Respectful, inclusive collaboration is required in all spaces.\nCode of Conduct: Code of Conduct Reporting: conduct@plakar.io (confidential) Contributing Guide: Contributing Guide Key principles (summary):\nBe respectful and constructive. Focus on technical merit. No harassment or discrimination. Use clear, transparent communication. Read the full documents on GitHub for enforcement scope and process.\nContributing to Plakar # Thank you for your interest in contributing to Plakar! We welcome contributions of all kinds, including new features, bug fixes, documentation improvements, and more. Please take a moment to read through this guide to understand our contribution process and how to get started.\nHow to Contribute # 1. Reporting Bugs and Issues\nIf you\u0026rsquo;ve found a bug or have a suggestion for improvement, please open an issue on GitHub. Include as much detail as possible, such as:\nA description of the problem or suggestion. Steps to reproduce the issue (if applicable). Relevant logs or error messages. Any other context that would help us understand the problem. 2. Suggesting Features\nWe are always looking for ways to make Plakar better! If you have a feature request, feel free to open an issue with a detailed explanation of the proposed feature, its use cases, and potential benefits.\n3. Submitting Changes\nBefore starting any work, it\u0026rsquo;s a good idea to discuss your idea with the maintainers by opening an issue or commenting on an existing one. This helps ensure your work aligns with project goals and saves time for everyone involved.\nSteps to Submit Changes:\nFork the Repository\nCreate a personal fork of the repository on GitHub to work in. Create a Feature Branch\nUse descriptive names for your branches, such as fix-bug-issue123 or feature-new-backup-strategy. Write Clear, Concise Commit Messages\nEach commit message should clearly describe what change was made and why. 
Use the present tense, e.g., \u0026ldquo;Fix issue with backup scheduler.\u0026rdquo; Follow the Coding Style\nAdhere to the project’s code style and formatting.\nEnsure your code is clear, maintainable, and well-documented. Run Tests and Linters\nMake sure your code passes all tests and follows the required linting rules.\nIf applicable, add new tests to verify your changes. Submit a Pull Request (PR)\nOnce your changes are complete and tested, open a PR against the main branch. Provide a detailed description of your changes, referencing any related issues. 4. Code Review Process\nAll PRs will be reviewed by project maintainers. Feedback may be provided, and changes might be requested. Please be open to discussions and willing to make adjustments based on the review.\nRespond Promptly: Address review comments promptly to keep the PR moving forward. Stay Technical: Reviews will be focused on technical merit and alignment with project goals. 5. Documentation Contributions\nGood documentation is crucial for any project! Contributions to documentation are highly valued. You can:\nUpdate or improve existing documentation. Add new documentation for features, setup instructions, or developer guides. Ensure that all code contributions include relevant documentation updates. 6. Code Cleanup and Maintenance\nCode cleanup, refactoring, and removing unused code are essential contributions that help keep the project healthy. Do not hesitate to submit PRs that address these issues even if they do not add new features.\n7. Licensing and Dependency Guidelines\nAll contributions must comply with the project\u0026rsquo;s license.\nBe cautious when introducing new dependencies. Avoid dependencies with viral licensing (e.g., GPL) unless discussed and approved by the maintainers.\nEnsure any new dependencies are well-maintained and have a compatible license.\n8. 
Contributor Code of Conduct\nAll contributors must follow the project\u0026rsquo;s Code of Conduct to ensure a welcoming and respectful environment for everyone.\nGetting Help\nIf you need help or have questions, feel free to reach out by:\nOpening an issue on GitHub. Asking on our mailing list or chat channels. Reaching out to the maintainers directly. We appreciate your time and effort in making Plakar better. Happy coding!\nWhat Belongs Where? # Goal: Recommended Place: Ask a quick question Discord Propose a feature GitHub Issue Report a bug GitHub Issue Share community content Discord #general / Reddit Follow announcements X / Bluesky / LinkedIn Learn engagement rules Code of Conduct (GitHub) Need Help? # If unsure where to start:\nJoin Discord and say hello. Open an issue labeled question. Email: hello@plakar.io (general) or conduct@plakar.io (conduct related). Thank you for helping grow the Plakar ecosystem.\n","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/community/","section":"Docs","summary":"How to engage with the Plakar community: where to talk, collaborate, follow updates, and read the rules.","title":"Community","type":"docs"},{"content":" Community # The Plakar community drives the evolution of a new open-source standard for data protection. This documentation page gives you a practical overview of how to join, participate, and stay informed.\nLooking for the public facing community page with visuals and calls to action? 
Visit the dedicated portal: Community Portal.\nQuick Access # Main Community Page: community Code of Conduct: Code of Conduct Contributing Guide: Contributing Guide Contact \u0026amp; Social Channels # Choose the channel that matches your intent:\nDiscord (real-time chat \u0026amp; open work sessions) Reddit (asynchronous discussions \u0026amp; feedback) X (announcements \u0026amp; progress) Bluesky (announcements \u0026amp; progress) LinkedIn (announcements \u0026amp; progress) GitHub (issues, code, roadmap) Open Collaboration Spaces # Daily Briefings \u0026amp; Hackrooms: Run live in Discord voice channels. Weekly Community Calls: Agenda posted in Discord announcements. Early Preview Builds: Shared via GitHub release candidates and Discord #releases channel. Code of Conduct \u0026amp; Governance # Respectful, inclusive collaboration is required in all spaces.\nCode of Conduct: Code of Conduct Reporting: conduct@plakar.io (confidential) Contributing Guide: Contributing Guide Key principles (summary):\nBe respectful and constructive. Focus on technical merit. No harassment or discrimination. Use clear, transparent communication. Read the full documents on GitHub for enforcement scope and process.\nContributing to Plakar # Thank you for your interest in contributing to Plakar! We welcome contributions of all kinds, including new features, bug fixes, documentation improvements, and more. Please take a moment to read through this guide to understand our contribution process and how to get started.\nHow to Contribute # 1. Reporting Bugs and Issues\nIf you\u0026rsquo;ve found a bug or have a suggestion for improvement, please open an issue on GitHub. Include as much detail as possible, such as:\nA description of the problem or suggestion. Steps to reproduce the issue (if applicable). Relevant logs or error messages. Any other context that would help us understand the problem. 2. Suggesting Features\nWe are always looking for ways to make Plakar better! 
If you have a feature request, feel free to open an issue with a detailed explanation of the proposed feature, its use cases, and potential benefits.\n3. Submitting Changes\nBefore starting any work, it\u0026rsquo;s a good idea to discuss your idea with the maintainers by opening an issue or commenting on an existing one. This helps ensure your work aligns with project goals and saves time for everyone involved.\nSteps to Submit Changes:\nFork the Repository\nCreate a personal fork of the repository on GitHub to work in. Create a Feature Branch\nUse descriptive names for your branches, such as fix-bug-issue123 or feature-new-backup-strategy. Write Clear, Concise Commit Messages\nEach commit message should clearly describe what change was made and why. Use the present tense, e.g., \u0026ldquo;Fix issue with backup scheduler.\u0026rdquo; Follow the Coding Style\nAdhere to the project’s code style and formatting.\nEnsure your code is clear, maintainable, and well-documented. Run Tests and Linters\nMake sure your code passes all tests and follows the required linting rules.\nIf applicable, add new tests to verify your changes. Submit a Pull Request (PR)\nOnce your changes are complete and tested, open a PR against the main branch. Provide a detailed description of your changes, referencing any related issues. 4. Code Review Process\nAll PRs will be reviewed by project maintainers. Feedback may be provided, and changes might be requested. Please be open to discussions and willing to make adjustments based on the review.\nRespond Promptly: Address review comments promptly to keep the PR moving forward. Stay Technical: Reviews will be focused on technical merit and alignment with project goals. 5. Documentation Contributions\nGood documentation is crucial for any project! Contributions to documentation are highly valued. You can:\nUpdate or improve existing documentation. Add new documentation for features, setup instructions, or developer guides. 
Ensure that all code contributions include relevant documentation updates. 6. Code Cleanup and Maintenance\nCode cleanup, refactoring, and removing unused code are essential contributions that help keep the project healthy. Do not hesitate to submit PRs that address these issues even if they do not add new features.\n7. Licensing and Dependency Guidelines\nAll contributions must comply with the project\u0026rsquo;s license.\nBe cautious when introducing new dependencies. Avoid dependencies with viral licensing (e.g., GPL) unless discussed and approved by the maintainers.\nEnsure any new dependencies are well-maintained and have a compatible license.\n8. Contributor Code of Conduct\nAll contributors must follow the project\u0026rsquo;s Code of Conduct to ensure a welcoming and respectful environment for everyone.\nGetting Help\nIf you need help or have questions, feel free to reach out by:\nOpening an issue on GitHub. Asking on our mailing list or chat channels. Reaching out to the maintainers directly. We appreciate your time and effort in making Plakar better. Happy coding!\nWhat Belongs Where? # Goal: Recommended Place: Ask a quick question Discord Propose a feature GitHub Issue Report a bug GitHub Issue Share community content Discord #general / Reddit Follow announcements X / Bluesky / LinkedIn Learn engagement rules Code of Conduct (GitHub) Need Help? # If unsure where to start:\nJoin Discord and say hello. Open an issue labeled question. Email: hello@plakar.io (general) or conduct@plakar.io (conduct related). 
Thank you for helping grow the Plakar ecosystem.\n","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/community/","section":"Docs","summary":"How to engage with the Plakar community: where to talk, collaborate, follow updates, and read the rules.","title":"Community","type":"docs"},{"content":" Koofr # The Koofr integration package for Plakar allows you to back up and restore data to and from Koofr, as well as host Kloset stores directly within Koofr. It is built on top of Rclone, a command-line tool for managing cloud storage backends.\nThe integration provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Koofr remote must be configured. Typical use cases\nCold backup of Koofr folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Koofr, install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with Koofr:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Koofr.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;Koofr\u0026rdquo; from the list of supported storage providers. Enter your email and your app password. If you don\u0026rsquo;t have an app password, generate one in your Koofr account settings. Confirm the settings. 
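Once you confirm, rclone writes the remote into its configuration file (run rclone config file to see its location). For a Koofr remote created with the defaults above, the stored entry looks roughly like the following sketch — exact field names can vary between rclone versions, and the app password is stored obscured, never in clear text:

```
[mydrive]
type = koofr
provider = koofr
user = you@example.com
password = <obscured app password>
```

The name in brackets is whatever you chose at the naming prompt; the examples on this page assume it is mydrive.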
To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Koofr files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Koofr.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Koofr via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone 
configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nSee also # Rclone documentation for Koofr ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/main/integrations/koofr/","section":"Docs","summary":"Back up and restore your Koofr with Plakar, and host Kloset stores in Koofr.","title":"Koofr","type":"docs"},{"content":" Koofr # The Koofr integration package for Plakar allows you to back up and restore data to and from Koofr, as well as host Kloset stores directly within Koofr. It is built on top of Rclone, a command-line tool for managing cloud storage backends.\nThe integration provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Koofr remote must be configured. Typical use cases\nCold backup of Koofr folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Koofr, install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with Koofr:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Koofr.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;Koofr\u0026rdquo; from the list of supported storage providers. Enter your email and your app password. If you don\u0026rsquo;t have an app password, generate one in your Koofr account settings. Confirm the settings. 
To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Koofr files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Koofr.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Koofr via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone 
configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nSee also # Rclone documentation for Koofr ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/integrations/koofr/","section":"Docs","summary":"Back up and restore your Koofr with Plakar, and host Kloset stores in Koofr.","title":"Koofr","type":"docs"},{"content":" Koofr # The Koofr integration package for Plakar allows you to back up and restore data to and from Koofr, as well as host Kloset stores directly within Koofr. It is built on top of Rclone, a command-line tool for managing cloud storage backends.\nThe integration provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Koofr remote must be configured. Typical use cases\nCold backup of Koofr folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Koofr, install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with Koofr:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Koofr.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;Koofr\u0026rdquo; from the list of supported storage providers. Enter your email and your app password. If you don\u0026rsquo;t have an app password, generate one in your Koofr account settings. Confirm the settings. 
To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Koofr files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Koofr.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Koofr via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone 
configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nSee also # Rclone documentation for Koofr ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/integrations/koofr/","section":"Docs","summary":"Back up and restore your Koofr with Plakar, and host Kloset stores in Koofr.","title":"Koofr","type":"docs"},{"content":" Koofr # The Koofr integration package for Plakar allows you to back up and restore data to and from Koofr, as well as host Kloset stores directly within Koofr. It is built on top of Rclone, a command-line tool for managing cloud storage backends.\nThe integration provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Koofr remote must be configured. Typical use cases\nCold backup of Koofr folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Koofr, install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with Koofr:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Koofr.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;Koofr\u0026rdquo; from the list of supported storage providers. Enter your email and your app password. If you don\u0026rsquo;t have an app password, generate one in your Koofr account settings. Confirm the settings. 
To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Koofr files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Koofr.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Koofr via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone 
configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nSee also # Rclone documentation for Koofr ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/integrations/koofr/","section":"Docs","summary":"Back up and restore your Koofr with Plakar, and host Kloset stores in Koofr.","title":"Koofr","type":"docs"},{"content":" How Maintenance Works # Plakar uses chunking and deduplication to store backups efficiently. 
Multiple snapshots share data, so only what has actually changed gets written to the store. Because of this, removing a snapshot is a two-step process: plakar rm marks it as deleted, and plakar maintenance does the actual cleanup. This page explains how that works.\nHow data is stored # Chunking and deduplication # When a backup runs, Plakar does not store files as-is. Instead, it splits the incoming data stream into variable-size pieces called chunks, using a Content-Defined Chunking (CDC) algorithm. Before writing a chunk, Plakar checks whether it already exists in the store. If it does, it is reused rather than written again.\nThis is what makes deduplication work across snapshots. Two backups of the same directory, taken a day apart, will share the vast majority of their chunks. Only the chunks corresponding to files that actually changed will be new.\nFor a deeper look at CDC and the library Plakar uses to implement it, see the go-cdc-chunkers blog post.\nPackfiles # Chunks are not written to the store one by one. For performance reasons, Plakar groups many chunks together into larger containers called packfiles, targeting roughly 64 MB each. This reduces the number of objects written to the store and makes storage and network operations significantly more efficient.\nWhen possible, Plakar tries to keep chunks belonging to the same file in the same packfile. This limits fragmentation and makes restores faster.\nThe diagram below shows how two snapshots can share chunks inside the same packfile. 
Chunk 2 and Chunk 3 are referenced by both Snapshot A and Snapshot B, but exist only once in the store.\nflowchart LR subgraph Snapshots S1[\"Snapshot A\"] S2[\"Snapshot B\"] end subgraph P1[\"Packfile 1\"] C1[\"Chunk 1\"] C2[\"Chunk 2 ◄── shared\"] C3[\"Chunk 3 ◄── shared\"] end subgraph P2[\"Packfile 2\"] C4[\"Chunk 4\"] end S1 --\u003e C1 S1 --\u003e C2 S1 --\u003e C3 S2 --\u003e C2 S2 --\u003e C3 S2 --\u003e C4 Why deleting a snapshot does not free space immediately # Because chunks are shared across snapshots, Plakar cannot simply delete a chunk when a snapshot is removed. Another snapshot might still be referencing it.\nWhen you run plakar rm, Plakar records a new state where that snapshot no longer exists. The store reflects this immediately: the snapshot is gone from listings and cannot be restored. But the underlying chunks and packfiles remain untouched until maintenance determines whether they are still needed.\nThe maintenance process # Running plakar maintenance is what actually reclaims storage. During a maintenance run, Plakar scans the store and identifies chunks that are no longer referenced by any snapshot. Those chunks are marked as candidates for deletion and held for a grace period before removal.\nMaintenance can be automated using the Plakar scheduler. See Scheduling tasks for details.\nThe grace period # Marked chunks remain in the store for a grace period, currently set to 7 days. On the next maintenance run after that window, chunks that are still unreferenced become eligible for removal. If a chunk has since been referenced again by a new backup, the mark is removed and the chunk is kept.\nThis delay exists to protect backups that are currently in progress. 
A long-running backup might write chunks that appear unreferenced to maintenance, because the snapshot that will reference them has not been committed yet.\nflowchart LR A[\"plakar rm (snapshot removed)\"] --\u003e M[\"plakar maintenance scans the store\"] M --\u003e B[\"Unreferenced chunks marked for deletion\"] B --\u003e C{\"Grace period elapsed?\"} C -- No --\u003e D[\"Chunks retained\"] C -- Yes --\u003e E{\"Still unreferenced?\"} E -- No --\u003e D E -- Yes --\u003e F[\"Eligible for removal\"] F --\u003e G[\"Packfile GC\"] Garbage collection at the packfile level # Maintenance does not remove chunks directly; it operates at the packfile level. A packfile can only be removed if every chunk it contains is unreferenced. If even one chunk inside a packfile is still needed, the whole packfile stays.\nThis means some unreferenced chunks may remain stored beyond the grace period if they share a packfile with chunks that are still active. This is a known tradeoff of the current design.\nCheckpoints and long-running backups # A potential problem arises with backups that run longer than the grace period. Maintenance could mark chunks as unreferenced while a backup is still in progress and has not yet committed its snapshot. If those chunks were deleted, the backup would fail or produce a corrupt snapshot.\nPlakar prevents this through a checkpoint mechanism: the state of an in-progress backup is recorded every hour. Maintenance accounts for these checkpoints and treats any chunks referenced by an active checkpoint as still in use, regardless of whether a snapshot has been committed yet.\nRecompaction # The current maintenance model can only garbage-collect packfiles that are entirely unreferenced. Partially unused packfiles cannot be compacted. Consider the following scenario:\nA large file is backed up; its chunks are written into a packfile. Months later, the file changes significantly. New chunks are written to new packfiles. 
The original packfile now contains a mix of still-referenced chunks (unchanged parts of the file) and unreferenced ones (parts that changed). That packfile cannot be removed, because it is not fully unreferenced. The unused chunks inside it remain in the store.\nA future recompaction feature (currently under development) will address this by merging underutilized packfiles and regrouping related chunks, making it possible to reclaim space from packfiles that are mostly stale.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/explanations/how-maintenance-works/","section":"Docs","summary":"Understand how Plakar stores backup data in chunks and packfiles, why deleting a snapshot does not immediately free space, and how the maintenance process safely reclaims unused storage.","title":"How Maintenance Works","type":"docs"},{"content":" How Maintenance Works # Plakar uses chunking and deduplication to store backups efficiently. Multiple snapshots share data, so only what has actually changed gets written to the store. Because of this, removing a snapshot is a two-step process: plakar rm marks it as deleted, and plakar maintenance does the actual cleanup. This page explains how that works.\nHow data is stored # Chunking and deduplication # When a backup runs, Plakar does not store files as-is. Instead, it splits the incoming data stream into variable-size pieces called chunks, using a Content-Defined Chunking (CDC) algorithm. Before writing a chunk, Plakar checks whether it already exists in the store. If it does, it is reused rather than written again.\nThis is what makes deduplication work across snapshots. Two backups of the same directory, taken a day apart, will share the vast majority of their chunks. Only the chunks corresponding to files that actually changed will be new.\nFor a deeper look at CDC and the library Plakar uses to implement it, see the go-cdc-chunkers blog post.\nPackfiles # Chunks are not written to the store one by one. 
For performance reasons, Plakar groups many chunks together into larger containers called packfiles, targeting roughly 64 MB each. This reduces the number of objects written to the store and makes storage and network operations significantly more efficient.\nWhen possible, Plakar tries to keep chunks belonging to the same file in the same packfile. This limits fragmentation and makes restores faster.\nThe diagram below shows how two snapshots can share chunks inside the same packfile. Chunk 2 and Chunk 3 are referenced by both Snapshot A and Snapshot B, but exist only once in the store.\nflowchart LR subgraph Snapshots S1[\"Snapshot A\"] S2[\"Snapshot B\"] end subgraph P1[\"Packfile 1\"] C1[\"Chunk 1\"] C2[\"Chunk 2 ◄── shared\"] C3[\"Chunk 3 ◄── shared\"] end subgraph P2[\"Packfile 2\"] C4[\"Chunk 4\"] end S1 --\u003e C1 S1 --\u003e C2 S1 --\u003e C3 S2 --\u003e C2 S2 --\u003e C3 S2 --\u003e C4 Why deleting a snapshot does not free space immediately # Because chunks are shared across snapshots, Plakar cannot simply delete a chunk when a snapshot is removed. Another snapshot might still be referencing it.\nWhen you run plakar rm, Plakar records a new state where that snapshot no longer exists. The store reflects this immediately: the snapshot is gone from listings and cannot be restored. But the underlying chunks and packfiles remain untouched until maintenance determines whether they are still needed.\nThe maintenance process # Running plakar maintenance is what actually reclaims storage. During a maintenance run, Plakar scans the store and identifies chunks that are no longer referenced by any snapshot. Those chunks are marked as candidates for deletion and held for a grace period before removal.\nMaintenance can be automated using the Plakar scheduler. See Scheduling tasks for details.\nThe grace period # Marked chunks remain in the store for a grace period, currently set to 30 days. 
On the next maintenance run after that window, chunks that are still unreferenced become eligible for removal. If a chunk has since been referenced again by a new backup, the mark is removed and the chunk is kept.\nThis delay exists to protect backups that are currently in progress. A long-running backup might write chunks that appear unreferenced to maintenance, because the snapshot that will reference them has not been committed yet.\nflowchart LR A[\"plakar rm (snapshot removed)\"] --\u003e M[\"plakar maintenance scans the store\"] M --\u003e B[\"Unreferenced chunks marked for deletion\"] B --\u003e C{\"Grace period elapsed?\"} C -- No --\u003e D[\"Chunks retained\"] C -- Yes --\u003e E{\"Still unreferenced?\"} E -- No --\u003e D E -- Yes --\u003e F[\"Eligible for removal\"] F --\u003e G[\"Packfile GC\"] Garbage collection at the packfile level # Maintenance does not remove chunks directly, it operates at the packfile level. A packfile can only be removed if every chunk it contains is unreferenced. If even one chunk inside a packfile is still needed, the whole packfile stays.\nThis means some unreferenced chunks may remain stored beyond the grace period if they share a packfile with chunks that are still active. This is a known tradeoff of the current design.\nCheckpoints and long-running backups # A potential problem arises with backups that run longer than the grace period. Maintenance could mark chunks as unreferenced while a backup is still in progress and has not yet committed its snapshot. If those chunks were deleted, the backup would fail or produce a corrupt snapshot.\nPlakar prevents this through a checkpoint mechanism: the state of an in-progress backup is recorded every hour. 
Maintenance accounts for these checkpoints and treats any chunks referenced by an active checkpoint as still in use, regardless of whether a snapshot has been committed yet.\nRecompaction # The current maintenance model can only garbage-collect packfiles that are entirely unreferenced. Partially unused packfiles cannot be compacted. Consider the following scenario:\nA large file is backed up; its chunks are written into a packfile. Months later, the file changes significantly. New chunks are written to new packfiles. The original packfile now contains a mix of still-referenced chunks (unchanged parts of the file) and unreferenced ones (parts that changed). That packfile cannot be removed, because it is not fully unreferenced. The unused chunks inside it remain in the store.\nA future recompaction feature (currently under development) will address this by merging underutilized packfiles and regrouping related chunks, making it possible to reclaim space from packfiles that are mostly stale.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/explanations/how-maintenance-works/","section":"Docs","summary":"Understand how Plakar stores backup data in chunks and packfiles, why deleting a snapshot does not immediately free space, and how the maintenance process safely reclaims unused storage.","title":"How Maintenance Works","type":"docs"},{"content":" How Maintenance Works # Plakar uses chunking and deduplication to store backups efficiently. Multiple snapshots share data, so only what has actually changed gets written to the store. Because of this, removing a snapshot is a two-step process: plakar rm marks it as deleted, and plakar maintenance does the actual cleanup. This page explains how that works.\nHow data is stored # Chunking and deduplication # When a backup runs, Plakar does not store files as-is. Instead, it splits the incoming data stream into variable-size pieces called chunks, using a Content-Defined Chunking (CDC) algorithm. 
Before writing a chunk, Plakar checks whether it already exists in the store. If it does, it is reused rather than written again.\nThis is what makes deduplication work across snapshots. Two backups of the same directory, taken a day apart, will share the vast majority of their chunks. Only the chunks corresponding to files that actually changed will be new.\nFor a deeper look at CDC and the library Plakar uses to implement it, see the go-cdc-chunkers blog post.\nPackfiles # Chunks are not written to the store one by one. For performance reasons, Plakar groups many chunks together into larger containers called packfiles, targeting roughly 64 MB each. This reduces the number of objects written to the store and makes storage and network operations significantly more efficient.\nWhen possible, Plakar tries to keep chunks belonging to the same file in the same packfile. This limits fragmentation and makes restores faster.\nThe diagram below shows how two snapshots can share chunks inside the same packfile. Chunk 2 and Chunk 3 are referenced by both Snapshot A and Snapshot B, but exist only once in the store.\nflowchart LR subgraph Snapshots S1[\"Snapshot A\"] S2[\"Snapshot B\"] end subgraph P1[\"Packfile 1\"] C1[\"Chunk 1\"] C2[\"Chunk 2 ◄── shared\"] C3[\"Chunk 3 ◄── shared\"] end subgraph P2[\"Packfile 2\"] C4[\"Chunk 4\"] end S1 --\u003e C1 S1 --\u003e C2 S1 --\u003e C3 S2 --\u003e C2 S2 --\u003e C3 S2 --\u003e C4 Why deleting a snapshot does not free space immediately # Because chunks are shared across snapshots, Plakar cannot simply delete a chunk when a snapshot is removed. Another snapshot might still be referencing it.\nWhen you run plakar rm, Plakar records a new state where that snapshot no longer exists. The store reflects this immediately, the snapshot is gone from listings and cannot be restored. 
But the underlying chunks and packfiles remain untouched until maintenance determines whether they are still needed.\nThe maintenance process # Running plakar maintenance is what actually reclaims storage. During a maintenance run, Plakar scans the store and identifies chunks that are no longer referenced by any snapshot. Those chunks are marked as candidates for deletion and held for a grace period before removal.\nMaintenance can be automated using the Plakar scheduler. See Scheduling tasks for details.\nThe grace period # Marked chunks remain in the store for a grace period, currently set to 30 days. On the next maintenance run after that window, chunks that are still unreferenced become eligible for removal. If a chunk has since been referenced again by a new backup, the mark is removed and the chunk is kept.\nThis delay exists to protect backups that are currently in progress. A long-running backup might write chunks that appear unreferenced to maintenance, because the snapshot that will reference them has not been committed yet.\nflowchart LR A[\"plakar rm (snapshot removed)\"] --\u003e M[\"plakar maintenance scans the store\"] M --\u003e B[\"Unreferenced chunks marked for deletion\"] B --\u003e C{\"Grace period elapsed?\"} C -- No --\u003e D[\"Chunks retained\"] C -- Yes --\u003e E{\"Still unreferenced?\"} E -- No --\u003e D E -- Yes --\u003e F[\"Eligible for removal\"] F --\u003e G[\"Packfile GC\"] Garbage collection at the packfile level # Maintenance does not remove chunks directly, it operates at the packfile level. A packfile can only be removed if every chunk it contains is unreferenced. If even one chunk inside a packfile is still needed, the whole packfile stays.\nThis means some unreferenced chunks may remain stored beyond the grace period if they share a packfile with chunks that are still active. 
This is a known tradeoff of the current design.\nCheckpoints and long-running backups # A potential problem arises with backups that run longer than the grace period. Maintenance could mark chunks as unreferenced while a backup is still in progress and has not yet committed its snapshot. If those chunks were deleted, the backup would fail or produce a corrupt snapshot.\nPlakar prevents this through a checkpoint mechanism: the state of an in-progress backup is recorded every hour. Maintenance accounts for these checkpoints and treats any chunks referenced by an active checkpoint as still in use, regardless of whether a snapshot has been committed yet.\nRecompaction # The current maintenance model can only garbage-collect packfiles that are entirely unreferenced. Partially unused packfiles cannot be compacted. Consider the following scenario:\nA large file is backed up; its chunks are written into a packfile. Months later, the file changes significantly. New chunks are written to new packfiles. The original packfile now contains a mix of still-referenced chunks (unchanged parts of the file) and unreferenced ones (parts that changed). That packfile cannot be removed, because it is not fully unreferenced. The unused chunks inside it remain in the store.\nA future recompaction feature (currently under development) will address this by merging underutilized packfiles and regrouping related chunks, making it possible to reclaim space from packfiles that are mostly stale.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/explanations/how-maintenance-works/","section":"Docs","summary":"Understand how Plakar stores backup data in chunks and packfiles, why deleting a snapshot does not immediately free space, and how the maintenance process safely reclaims unused storage.","title":"How Maintenance Works","type":"docs"},{"content":" How Maintenance Works # Plakar uses chunking and deduplication to store backups efficiently. 
Multiple snapshots share data, so only what has actually changed gets written to the store. Because of this, removing a snapshot is a two-step process: plakar rm marks it as deleted, and plakar maintenance does the actual cleanup. This page explains how that works.\nHow data is stored # Chunking and deduplication # When a backup runs, Plakar does not store files as-is. Instead, it splits the incoming data stream into variable-size pieces called chunks, using a Content-Defined Chunking (CDC) algorithm. Before writing a chunk, Plakar checks whether it already exists in the store. If it does, it is reused rather than written again.\nThis is what makes deduplication work across snapshots. Two backups of the same directory, taken a day apart, will share the vast majority of their chunks. Only the chunks corresponding to files that actually changed will be new.\nFor a deeper look at CDC and the library Plakar uses to implement it, see the go-cdc-chunkers blog post.\nPackfiles # Chunks are not written to the store one by one. For performance reasons, Plakar groups many chunks together into larger containers called packfiles, targeting roughly 64 MB each. This reduces the number of objects written to the store and makes storage and network operations significantly more efficient.\nWhen possible, Plakar tries to keep chunks belonging to the same file in the same packfile. This limits fragmentation and makes restores faster.\nThe diagram below shows how two snapshots can share chunks inside the same packfile. 
Chunk 2 and Chunk 3 are referenced by both Snapshot A and Snapshot B, but exist only once in the store.\nflowchart LR subgraph Snapshots S1[\"Snapshot A\"] S2[\"Snapshot B\"] end subgraph P1[\"Packfile 1\"] C1[\"Chunk 1\"] C2[\"Chunk 2 ◄── shared\"] C3[\"Chunk 3 ◄── shared\"] end subgraph P2[\"Packfile 2\"] C4[\"Chunk 4\"] end S1 --\u003e C1 S1 --\u003e C2 S1 --\u003e C3 S2 --\u003e C2 S2 --\u003e C3 S2 --\u003e C4 Why deleting a snapshot does not free space immediately # Because chunks are shared across snapshots, Plakar cannot simply delete a chunk when a snapshot is removed. Another snapshot might still be referencing it.\nWhen you run plakar rm, Plakar records a new state where that snapshot no longer exists. The store reflects this immediately, the snapshot is gone from listings and cannot be restored. But the underlying chunks and packfiles remain untouched until maintenance determines whether they are still needed.\nThe maintenance process # Running plakar maintenance is what actually reclaims storage. During a maintenance run, Plakar scans the store and identifies chunks that are no longer referenced by any snapshot. Those chunks are marked as candidates for deletion and held for a grace period before removal.\nMaintenance can be automated using the Plakar scheduler. See Scheduling tasks for details.\nThe grace period # Marked chunks remain in the store for a grace period, currently set to 7 days. On the next maintenance run after that window, chunks that are still unreferenced become eligible for removal. If a chunk has since been referenced again by a new backup, the mark is removed and the chunk is kept.\nThis delay exists to protect backups that are currently in progress. 
A long-running backup might write chunks that appear unreferenced to maintenance, because the snapshot that will reference them has not been committed yet.\nflowchart LR A[\"plakar rm (snapshot removed)\"] --\u003e M[\"plakar maintenance scans the store\"] M --\u003e B[\"Unreferenced chunks marked for deletion\"] B --\u003e C{\"Grace period elapsed?\"} C -- No --\u003e D[\"Chunks retained\"] C -- Yes --\u003e E{\"Still unreferenced?\"} E -- No --\u003e D E -- Yes --\u003e F[\"Eligible for removal\"] F --\u003e G[\"Packfile GC\"] Garbage collection at the packfile level # Maintenance does not remove chunks directly, it operates at the packfile level. A packfile can only be removed if every chunk it contains is unreferenced. If even one chunk inside a packfile is still needed, the whole packfile stays.\nThis means some unreferenced chunks may remain stored beyond the grace period if they share a packfile with chunks that are still active. This is a known tradeoff of the current design.\nCheckpoints and long-running backups # A potential problem arises with backups that run longer than the grace period. Maintenance could mark chunks as unreferenced while a backup is still in progress and has not yet committed its snapshot. If those chunks were deleted, the backup would fail or produce a corrupt snapshot.\nPlakar prevents this through a checkpoint mechanism: the state of an in-progress backup is recorded every hour. Maintenance accounts for these checkpoints and treats any chunks referenced by an active checkpoint as still in use, regardless of whether a snapshot has been committed yet.\nRecompaction # The current maintenance model can only garbage-collect packfiles that are entirely unreferenced. Partially unused packfiles cannot be compacted. Consider the following scenario:\nA large file is backed up; its chunks are written into a packfile. Months later, the file changes significantly. New chunks are written to new packfiles. 
The original packfile now contains a mix of still-referenced chunks (unchanged parts of the file) and unreferenced ones (parts that changed). That packfile cannot be removed, because it is not fully unreferenced. The unused chunks inside it remain in the store.\nA future recompaction feature (currently under development) will address this by merging underutilized packfiles and regrouping related chunks, making it possible to reclaim space from packfiles that are mostly stale.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/explanations/how-maintenance-works/","section":"Docs","summary":"Understand how Plakar stores backup data in chunks and packfiles, why deleting a snapshot does not immediately free space, and how the maintenance process safely reclaims unused storage.","title":"How Maintenance Works","type":"docs"},{"content":" Retrieving secrets via external command # Plakar can retrieve a Kloset Store passphrase by executing an external command. The command must write the passphrase to standard output. This lets you integrate password managers or secret stores instead of keeping the passphrase in plain text in the Plakar configuration.\nWhy you\u0026rsquo;d use an external command to retrieve passphrases # By default, Plakar prompts for the store passphrase on every command that acts on the store.
You can avoid this by storing it in the configuration, but that keeps it in plain text on disk.\nFor better security, you can delegate passphrase retrieval to an external secret manager such as 1Password, gopass, or HashiCorp Vault so the passphrase is never stored in plain text and access can be audited or revoked through the secret manager itself.\nSetting the command # Pass passphrase_cmd when adding the store:\n$ plakar store add mystore \\ location=/var/backups \\ passphrase_cmd=\u0026#39;gopass show mystore/passphrase\u0026#39; Or update an existing store:\n$ plakar store set mystore passphrase_cmd=\u0026#39;gopass show mystore/passphrase\u0026#39; When you access the store, Plakar executes the command, reads its stdout, and uses the result as the passphrase:\n$ plakar at \u0026#34;@mystore\u0026#34; ls Examples # gopass # $ passphrase_cmd=\u0026#39;gopass show mystore/passphrase\u0026#39; 1Password CLI # $ passphrase_cmd=\u0026#39;op read \u0026#34;op://Personal/mystore/password\u0026#34;\u0026#39; HashiCorp Vault # $ passphrase_cmd=\u0026#39;vault kv get -field=password secret/mystore\u0026#39; Limitation # The only hard requirement is that the command must not read from stdin. Plakar does not connect a terminal to the command\u0026rsquo;s stdin, so anything that attempts to read from it will fail. System-level prompts (biometrics, OS dialogs, GUI windows) are fine as long as they do not need input typed into the terminal.\nThe command must write only the passphrase to stdout. Any extra output will be treated as part of the passphrase.\nWhat\u0026rsquo;s coming # External command resolution is currently limited to the passphrase. 
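The stdout rule described in the limitation above is strict, so it helps to see it as code. The following is an illustrative sketch only (the helper name, script path, and secret file path are hypothetical, not part of Plakar): it prints the passphrase bytes and nothing else, stripping the trailing newline that secret files usually carry.

```shell
#!/bin/sh
# Hypothetical passphrase_cmd wrapper. Plakar reads every byte on
# stdout as part of the passphrase, so print the secret and nothing
# else: no prompts, no log lines, no trailing newline.
emit_passphrase() {
    # tr -d strips newlines; diagnostics, if any, must go to stderr
    tr -d '\n' < "$1"
}

# Only run when a secret file path was passed as an argument.
[ "$#" -gt 0 ] && emit_passphrase "$1"
```

It could then be wired up as, for example, passphrase_cmd='sh /usr/local/bin/emit-pass.sh /etc/plakar/mystore.passphrase' (both paths hypothetical).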
Work is underway to extend this to other configuration fields such as storage credentials and tokens.\n","date":"16 March 2026","externalUrl":null,"permalink":"/docs/main/guides/retrieve-passphrase-kloset-store/","section":"Docs","summary":"The passphrase for accessing an encrypted Kloset Store can be stored in the environment, a file, or in the configuration. It can also be retrieved via an external command, for example your password manager.","title":"Retrieving secrets via external command","type":"docs"},{"content":" Retrieving secrets via external command # Plakar can retrieve a Kloset Store passphrase by executing an external command. The command must write the passphrase to standard output. This lets you integrate password managers or secret stores instead of keeping the passphrase in plain text in the Plakar configuration.\nWhy you\u0026rsquo;d use an external command to retrieve passphrases # By default, Plakar prompts for the store passphrase on every command that acts on the store.
You can avoid this by storing it in the configuration, but that keeps it in plain text on disk.\nFor better security, you can delegate passphrase retrieval to an external secret manager such as 1Password, gopass, or HashiCorp Vault so the passphrase is never stored in plain text and access can be audited or revoked through the secret manager itself.\nSetting the command # Pass passphrase_cmd when adding the store:\n$ plakar store add mystore \\ location=/var/backups \\ passphrase_cmd=\u0026#39;gopass show mystore/passphrase\u0026#39; Or update an existing store:\n$ plakar store set mystore passphrase_cmd=\u0026#39;gopass show mystore/passphrase\u0026#39; When you access the store, Plakar executes the command, reads its stdout, and uses the result as the passphrase:\n$ plakar at \u0026#34;@mystore\u0026#34; ls Examples # gopass # $ passphrase_cmd=\u0026#39;gopass show mystore/passphrase\u0026#39; 1Password CLI # $ passphrase_cmd=\u0026#39;op read \u0026#34;op://Personal/mystore/password\u0026#34;\u0026#39; HashiCorp Vault # $ passphrase_cmd=\u0026#39;vault kv get -field=password secret/mystore\u0026#39; Limitation # The only hard requirement is that the command must not read from stdin. Plakar does not connect a terminal to the command\u0026rsquo;s stdin, so anything that attempts to read from it will fail. System-level prompts (biometrics, OS dialogs, GUI windows) are fine as long as they do not need input typed into the terminal.\nThe command must write only the passphrase to stdout. Any extra output will be treated as part of the passphrase.\nWhat\u0026rsquo;s coming # External command resolution is currently limited to the passphrase. 
Work is underway to extend this to other configuration fields such as storage credentials and tokens.\n","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/guides/retrieve-passphrase-kloset-store/","section":"Docs","summary":"The passphrase for accessing an encrypted Kloset Store can be stored in the environment, a file, or in the configuration. It can also be retrieved via an external command, for example your password manager.","title":"Retrieving secrets via external command","type":"docs"},{"content":" Retrieving secrets via external command # Plakar can retrieve a Kloset Store passphrase by executing an external command. The command must write the passphrase to standard output. This lets you integrate password managers or secret stores instead of keeping the passphrase in plain text in the Plakar configuration.\nWhy you\u0026rsquo;d use an external command to retrieve passphrases # By default, Plakar prompts for the store passphrase on every command that acts on the store.
You can avoid this by storing it in the configuration, but that keeps it in plain text on disk.\nFor better security, you can delegate passphrase retrieval to an external secret manager such as 1Password, gopass, or HashiCorp Vault so the passphrase is never stored in plain text and access can be audited or revoked through the secret manager itself.\nSetting the command # Pass passphrase_cmd when adding the store:\n$ plakar store add mystore \\ location=/var/backups \\ passphrase_cmd=\u0026#39;gopass show mystore/passphrase\u0026#39; Or update an existing store:\n$ plakar store set mystore passphrase_cmd=\u0026#39;gopass show mystore/passphrase\u0026#39; When you access the store, Plakar executes the command, reads its stdout, and uses the result as the passphrase:\n$ plakar at \u0026#34;@mystore\u0026#34; ls Examples # gopass # $ passphrase_cmd=\u0026#39;gopass show mystore/passphrase\u0026#39; 1Password CLI # $ passphrase_cmd=\u0026#39;op read \u0026#34;op://Personal/mystore/password\u0026#34;\u0026#39; HashiCorp Vault # $ passphrase_cmd=\u0026#39;vault kv get -field=password secret/mystore\u0026#39; Limitation # The only hard requirement is that the command must not read from stdin. Plakar does not connect a terminal to the command\u0026rsquo;s stdin, so anything that attempts to read from it will fail. System-level prompts (biometrics, OS dialogs, GUI windows) are fine as long as they do not need input typed into the terminal.\nThe command must write only the passphrase to stdout. Any extra output will be treated as part of the passphrase.\nWhat\u0026rsquo;s coming # External command resolution is currently limited to the passphrase. 
Work is underway to extend this to other configuration fields such as storage credentials and tokens.\n","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/retrieve-passphrase-kloset-store/","section":"Docs","summary":"The passphrase for accessing an encrypted Kloset Store can be stored in the environment, a file, or in the configuration. It can also be retrieved via an external command, for example your password manager.","title":"Retrieving secrets via external command","type":"docs"},{"content":" Retrieving secrets via external command # Plakar can retrieve a Kloset Store passphrase by executing an external command. The command must write the passphrase to standard output. This lets you integrate password managers or secret stores instead of keeping the passphrase in plain text in the Plakar configuration.\nWhy you\u0026rsquo;d use an external command to retrieve passphrases # By default, Plakar prompts for the store passphrase on every command that acts on the store.
You can avoid this by storing it in the configuration, but that keeps it in plain text on disk.\nFor better security, you can delegate passphrase retrieval to an external secret manager such as 1Password, gopass, or HashiCorp Vault so the passphrase is never stored in plain text and access can be audited or revoked through the secret manager itself.\nSetting the command # Pass passphrase_cmd when adding the store:\n$ plakar store add mystore \\ location=/var/backups \\ passphrase_cmd=\u0026#39;gopass show mystore/passphrase\u0026#39; Or update an existing store:\n$ plakar store set mystore passphrase_cmd=\u0026#39;gopass show mystore/passphrase\u0026#39; When you access the store, Plakar executes the command, reads its stdout, and uses the result as the passphrase:\n$ plakar at \u0026#34;@mystore\u0026#34; ls Examples # gopass # $ passphrase_cmd=\u0026#39;gopass show mystore/passphrase\u0026#39; 1Password CLI # $ passphrase_cmd=\u0026#39;op read \u0026#34;op://Personal/mystore/password\u0026#34;\u0026#39; HashiCorp Vault # $ passphrase_cmd=\u0026#39;vault kv get -field=password secret/mystore\u0026#39; Limitation # The only hard requirement is that the command must not read from stdin. Plakar does not connect a terminal to the command\u0026rsquo;s stdin, so anything that attempts to read from it will fail. System-level prompts (biometrics, OS dialogs, GUI windows) are fine as long as they do not need input typed into the terminal.\nThe command must write only the passphrase to stdout. Any extra output will be treated as part of the passphrase.\nWhat\u0026rsquo;s coming # External command resolution is currently limited to the passphrase. 
Work is underway to extend this to other configuration fields such as storage credentials and tokens.\n","date":"16 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/retrieve-passphrase-kloset-store/","section":"Docs","summary":"The passphrase for accessing an encrypted Kloset Store can be stored in the environment, a file, or in the configuration. It can also be retrieved via an external command, for example your password manager.","title":"Retrieving secrets via external command","type":"docs"},{"content":" Creating a custom connector # This guide shows how to create a custom Plakar Importer connector in Go, build it, package it, and install it using the plakar CLI.\nWhy write a custom connector? # Plakar ships with connectors for common sources and storage backends. When you need to back up something that isn\u0026rsquo;t supported out of the box, such as an internal database that\u0026rsquo;s not commonly used or a custom data source, you can write your own connector in Go and install it like any other package.\nWhat you will build # A minimal Importer connector that backs up a single hardcoded file. This is the simplest possible integration — once you understand the pattern, you can extend it to walk directories, read from APIs, or consume any other data source.\nPrerequisites # Go 1.21 or later plakar installed and available in your $PATH 1. Set up the project # Create a new Go module for your plugin:\nmkdir plakar-myimporter cd plakar-myimporter go mod init github.com/yourorg/plakar-myimporter Install the two required dependencies:\ngo get github.com/PlakarKorp/kloset go get github.com/PlakarKorp/go-kloset-sdk Create the project structure:\nplakar-myimporter/ ├── connector.go ├── importer/ │ └── main.go ├── manifest.yaml ├── Makefile ├── go.mod └── go.sum 2.
Implement the connector # Create connector.go:\npackage connector import ( \u0026#34;context\u0026#34; \u0026#34;io\u0026#34; \u0026#34;os\u0026#34; \u0026#34;path/filepath\u0026#34; \u0026#34;github.com/PlakarKorp/kloset/connectors\u0026#34; \u0026#34;github.com/PlakarKorp/kloset/connectors/importer\u0026#34; \u0026#34;github.com/PlakarKorp/kloset/location\u0026#34; \u0026#34;github.com/PlakarKorp/kloset/objects\u0026#34; ) const FILE = \u0026#34;/home/user/Documents/notes.md\u0026#34; func init() { importer.Register(\u0026#34;test\u0026#34;, location.FLAG_LOCALFS, NewImporter) } type testConnector struct{} func NewImporter(ctx context.Context, opts *connectors.Options, proto string, config map[string]string) (importer.Importer, error) { return \u0026amp;testConnector{}, nil } func (f *testConnector) Root() string { return filepath.Dir(FILE) } func (f *testConnector) Origin() string { return \u0026#34;localhost\u0026#34; } func (f *testConnector) Type() string { return \u0026#34;test\u0026#34; } func (f *testConnector) Flags() location.Flags { return location.FLAG_LOCALFS } func (f *testConnector) Ping(_ context.Context) error { return nil } func (f *testConnector) Close(_ context.Context) error { return nil } func (f *testConnector) Import(ctx context.Context, records chan\u0026lt;- *connectors.Record, results \u0026lt;-chan *connectors.Result) error { defer close(records) info, err := os.Stat(FILE) if err != nil { return err } fi := objects.FileInfo{ Lname: filepath.Base(FILE), Lsize: info.Size(), Lmode: info.Mode(), LmodTime: info.ModTime(), Ldev: 1, } records \u0026lt;- connectors.NewRecord(FILE, \u0026#34;\u0026#34;, fi, nil, func() (io.ReadCloser, error) { return os.Open(FILE) }) return nil } Writing to console Never write to os.Stdout. Plakar communicates with the plugin over gRPC through stdin/stdout — any writes there corrupt the stream. Use os.Stderr for debug output instead.\n3. 
Create the entrypoint # Create importer/main.go:\npackage main import ( \u0026#34;os\u0026#34; sdk \u0026#34;github.com/PlakarKorp/go-kloset-sdk\u0026#34; connector \u0026#34;github.com/yourorg/plakar-myimporter\u0026#34; ) func main() { sdk.EntrypointImporter(os.Args, connector.NewImporter) } 4. Write the manifest # Create manifest.yaml:\nname: test display_name: Test description: A minimal importer connector that backs up a single file. homepage: https://github.com/yourorg/plakar-myimporter license: ISC api_version: v1.1.0 version: v0.1.0 tier: third-party contact: mailto:you@example.com tags: [filesystem] connectors: - type: importer executable: test-importer protocols: [test] location_flags: [localfs] class: filesystem subclass: test validator: ./importer/schema.json args: [] extra_files: [] Not all fields are required for every integration. tags is optional metadata used for discovery. Under each connector, validator is only needed if your connector accepts a configuration schema; args and extra_files can be omitted entirely if you have no additional arguments to pass to the executable or no supplementary files to bundle. A minimal connector entry needs only type, executable and protocols.\nThe executable value must match the binary name you produce in the build step. The location_flags list must reflect the location.Flags returned by your connector\u0026rsquo;s Flags() method. Set class and subclass to values that best describe your data source — for a connector that reads from a local filesystem path, filesystem and your protocol name are appropriate choices.\n5. Build the plugin # Create a Makefile:\nbuild: go build -o test-importer ./importer Then build:\nmake build 6. Package and install # Package the plugin into a .ptar file:\nplakar pkg create Install it:\nplakar pkg add test-v0.1.0.ptar Verify the installation:\nplakar pkg show You should see test listed.\n7. 
Use the connector # Back up using your new importer:\nplakar at /var/backups backup test:// Because this connector uses a hardcoded file path, the location after test:// is ignored — the importer always reads from /home/user/Documents/notes.md.\nNext steps # Walking a directory — instead of a hardcoded path, parse the location from the config map (strings.TrimPrefix(config[\u0026quot;location\u0026quot;], proto+\u0026quot;://\u0026quot;)) and use filepath.WalkDir to send a record for each file.\nRemote sources — for connectors that talk to an API, use 0 as the flags value instead of location.FLAG_LOCALFS, and parse credentials and endpoints from the config map passed to your constructor.\nStreaming imports — if your source cannot be replayed (e.g. reading from a pipe or tarball), add location.FLAG_STREAM to your flags. Plakar will disable the progress bar and call Import only once.\nAdding an Exporter or Storage backend — implement the Exporter or Store interface, register it in init(), add a corresponding entrypoint directory, and add an entry to manifest.yaml.\nSee the SDK reference and the integration example repository for the full interface definitions and a complete working implementation.\n","date":"16 April 2026","externalUrl":null,"permalink":"/docs/main/guides/creating-a-custom-connector/","section":"Docs","summary":"Step-by-step guide to implement and install your own Plakar connector (importer) in Go.","title":"Creating a custom connector","type":"docs"},{"content":" Creating a custom connector # This guide shows how to create a custom Plakar Importer connector in Go, build it, package it, and install it using the plakar CLI.\nWhy write a custom connector? # Plakar ships with connectors for common sources and storage backends. 
When you need to back up something that isn\u0026rsquo;t supported out of the box, such as an internal database that isn\u0026rsquo;t commonly used or a custom data source, you can write your own connector in Go and install it like any other package.\nWhat you will build # A minimal Importer connector that backs up a single hardcoded file. This is the simplest possible integration — once you understand the pattern, you can extend it to walk directories, read from APIs, or consume any other data source.\nPrerequisites # Go 1.21 or later plakar installed and available in your $PATH 1. Set up the project # Create a new Go module for your plugin:\nmkdir plakar-myimporter cd plakar-myimporter go mod init github.com/yourorg/plakar-myimporter Install the two required dependencies:\ngo get github.com/PlakarKorp/kloset go get github.com/PlakarKorp/go-kloset-sdk Create the project structure:\nplakar-myimporter/ ├── connector.go ├── importer/ │ └── main.go ├── manifest.yaml ├── Makefile ├── go.mod └── go.sum 2. 
Implement the connector # Create connector.go:\npackage connector import ( \u0026#34;context\u0026#34; \u0026#34;io\u0026#34; \u0026#34;os\u0026#34; \u0026#34;path/filepath\u0026#34; \u0026#34;github.com/PlakarKorp/kloset/connectors\u0026#34; \u0026#34;github.com/PlakarKorp/kloset/connectors/importer\u0026#34; \u0026#34;github.com/PlakarKorp/kloset/location\u0026#34; \u0026#34;github.com/PlakarKorp/kloset/objects\u0026#34; ) const FILE = \u0026#34;/home/user/Documents/notes.md\u0026#34; func init() { importer.Register(\u0026#34;test\u0026#34;, location.FLAG_LOCALFS, NewImporter) } type testConnector struct{} func NewImporter(ctx context.Context, opts *connectors.Options, proto string, config map[string]string) (importer.Importer, error) { return \u0026amp;testConnector{}, nil } func (f *testConnector) Root() string { return filepath.Dir(FILE) } func (f *testConnector) Origin() string { return \u0026#34;localhost\u0026#34; } func (f *testConnector) Type() string { return \u0026#34;test\u0026#34; } func (f *testConnector) Flags() location.Flags { return location.FLAG_LOCALFS } func (f *testConnector) Ping(_ context.Context) error { return nil } func (f *testConnector) Close(_ context.Context) error { return nil } func (f *testConnector) Import(ctx context.Context, records chan\u0026lt;- *connectors.Record, results \u0026lt;-chan *connectors.Result) error { defer close(records) info, err := os.Stat(FILE) if err != nil { return err } fi := objects.FileInfo{ Lname: filepath.Base(FILE), Lsize: info.Size(), Lmode: info.Mode(), LmodTime: info.ModTime(), Ldev: 1, } records \u0026lt;- connectors.NewRecord(FILE, \u0026#34;\u0026#34;, fi, nil, func() (io.ReadCloser, error) { return os.Open(FILE) }) return nil } Writing to console Never write to os.Stdout. Plakar communicates with the plugin over gRPC through stdin/stdout — any writes there corrupt the stream. Use os.Stderr for debug output instead.\n3. 
Create the entrypoint # Create importer/main.go:\npackage main import ( \u0026#34;os\u0026#34; sdk \u0026#34;github.com/PlakarKorp/go-kloset-sdk\u0026#34; connector \u0026#34;github.com/yourorg/plakar-myimporter\u0026#34; ) func main() { sdk.EntrypointImporter(os.Args, connector.NewImporter) } 4. Write the manifest # Create manifest.yaml:\nname: test display_name: Test description: A minimal importer connector that backs up a single file. homepage: https://github.com/yourorg/plakar-myimporter license: ISC api_version: v1.1.0 version: v0.1.0 tier: third-party contact: mailto:you@example.com tags: [filesystem] connectors: - type: importer executable: test-importer protocols: [test] location_flags: [localfs] class: filesystem subclass: test validator: ./importer/schema.json args: [] extra_files: [] Not all fields are required for every integration. tags is optional metadata used for discovery. Under each connector, validator is only needed if your connector accepts a configuration schema; args and extra_files can be omitted entirely if you have no additional arguments to pass to the executable or no supplementary files to bundle. A minimal connector entry needs only type, executable and protocols.\nThe executable value must match the binary name you produce in the build step. The location_flags list must reflect the location.Flags returned by your connector\u0026rsquo;s Flags() method. Set class and subclass to values that best describe your data source — for a connector that reads from a local filesystem path, filesystem and your protocol name are appropriate choices.\n5. Build the plugin # Create a Makefile:\nbuild: go build -o test-importer ./importer Then build:\nmake build 6. Package and install # Package the plugin into a .ptar file:\nplakar pkg create Install it:\nplakar pkg add test-v0.1.0.ptar Verify the installation:\nplakar pkg show You should see test listed.\n7. 
Use the connector # Back up using your new importer:\nplakar at /var/backups backup test:// Because this connector uses a hardcoded file path, the location after test:// is ignored — the importer always reads from /home/user/Documents/notes.md.\nNext steps # Walking a directory — instead of a hardcoded path, parse the location from the config map (strings.TrimPrefix(config[\u0026quot;location\u0026quot;], proto+\u0026quot;://\u0026quot;)) and use filepath.WalkDir to send a record for each file.\nRemote sources — for connectors that talk to an API, use 0 as the flags value instead of location.FLAG_LOCALFS, and parse credentials and endpoints from the config map passed to your constructor.\nStreaming imports — if your source cannot be replayed (e.g. reading from a pipe or tarball), add location.FLAG_STREAM to your flags. Plakar will disable the progress bar and call Import only once.\nAdding an Exporter or Storage backend — implement the Exporter or Store interface, register it in init(), add a corresponding entrypoint directory, and add an entry to manifest.yaml.\nSee the SDK reference and the integration example repository for the full interface definitions and a complete working implementation.\n","date":"16 April 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/creating-a-custom-connector/","section":"Docs","summary":"Step-by-step guide to implement and install your own Plakar connector (importer) in Go.","title":"Creating a custom connector","type":"docs"},{"content":" Google Drive # Google Drive is a widely used cloud storage service provided by Google, offering users the ability to store files, share documents, and collaborate in real time.\nRclone is a command-line program to manage files on cloud storage, and supports Google Drive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. 
Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Google Drive remote must be configured. Typical use cases\nCold backup of Google Drive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Google Drive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with Google Drive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Google Drive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;Google Drive\u0026rdquo; from the list of supported storage providers. Leave client_id and client_secret empty to use Rclone\u0026rsquo;s defaults, or provide your own if you have them. 
Select the number corresponding to \u0026ldquo;Full access all files, excluding Application Data Folder.\u0026rdquo;, or to \u0026ldquo;Read-only access to file metadata and file contents.\u0026rdquo; if you only need read access. Leave the service account file empty unless you have one. Stay with the current settings, and do not edit advanced config. Choose to open the browser for authentication. Set whether to use a shared drive or not depending on your needs. Validate the remote configuration. To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Google Drive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Google Drive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Google Drive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # The Google Drive API has rate limits; heavy usage may require throttling. File version history is not preserved. Only the current version of each file is snapshotted. Shared links and permissions are not preserved in snapshots. 
See also # Rclone documentation for Google Drive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/main/integrations/googledrive/","section":"Docs","summary":"Back up and restore your Google Drive with Plakar, and host Kloset stores in Google Drive.","title":"Google Drive","type":"docs"},{"content":" Google Drive # Google Drive is a widely used cloud storage service provided by Google, offering users the ability to store files, share documents, and collaborate in real time.\nRclone is a command-line program to manage files on cloud storage, and supports Google Drive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Google Drive remote must be configured. Typical use cases\nCold backup of Google Drive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Google Drive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with Google Drive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Google Drive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;Google Drive\u0026rdquo; from the list of supported storage providers. Leave client_id and client_secret empty to use Rclone\u0026rsquo;s defaults, or provide your own if you have them. Select the number corresponding to \u0026ldquo;Full access all files, excluding Application Data Folder.\u0026rdquo;, or to \u0026ldquo;Read-only access to file metadata and file contents.\u0026rdquo; if you only need read access. Leave the service account file empty unless you have one. Stay with the current settings, and do not edit advanced config. Choose to open the browser for authentication. Set whether to use a shared drive or not depending on your needs. Validate the remote configuration. 
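For orientation, the prompts above end up writing a remote section to your rclone configuration file. Assuming the remote is named mydrive and full access was chosen, the resulting entry looks roughly like this sketch (the OAuth token, generated during browser authentication, is abbreviated):

```ini
[mydrive]
type = drive
scope = drive
token = {"access_token":"…","token_type":"Bearer","expiry":"…"}
```

The exact fields vary with the options chosen during rclone config; running rclone config show mydrive prints the real values.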
To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Google Drive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Google Drive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Google Drive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # 
Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # The Google Drive API has rate limits; heavy usage may require throttling. File version history is not preserved. Only the current version of each file is snapshotted. Shared links and permissions are not preserved in snapshots. 
See also # Rclone documentation for Google Drive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/integrations/googledrive/","section":"Docs","summary":"Back up and restore your Google Drive with Plakar, and host Kloset stores in Google Drive.","title":"Google Drive","type":"docs"},{"content":" Google Drive # Google Drive is a widely used cloud storage service provided by Google, offering users the ability to store files, share documents, and collaborate in real time.\nRclone is a command-line program to manage files on cloud storage, and supports Google Drive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Google Drive remote must be configured. Typical use cases\nCold backup of Google Drive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Google Drive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with Google Drive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Google Drive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;Google Drive\u0026rdquo; from the list of supported storage providers. Leave client_id and client_secret empty to use Rclone\u0026rsquo;s defaults, or provide your own if you have them. Select the number corresponding to \u0026ldquo;Full access all files, excluding Application Data Folder.\u0026rdquo;, or to \u0026ldquo;Read-only access to file metadata and file contents.\u0026rdquo; if you only need read access. Leave the service account file empty unless you have one. Stay with the current settings, and do not edit advanced config. Choose to open the browser for authentication. Set whether to use a shared drive or not depending on your needs. Validate the remote configuration. 
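For orientation, the prompts above end up writing a remote section to your rclone configuration file. Assuming the remote is named mydrive and full access was chosen, the resulting entry looks roughly like this sketch (the OAuth token, generated during browser authentication, is abbreviated):

```ini
[mydrive]
type = drive
scope = drive
token = {"access_token":"…","token_type":"Bearer","expiry":"…"}
```

The exact fields vary with the options chosen during rclone config; running rclone config show mydrive prints the real values.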
To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Google Drive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Google Drive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Google Drive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # 
Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # The Google Drive API has rate limits; heavy usage may require throttling. File version history is not preserved. Only the current version of each file is snapshotted. Shared links and permissions are not preserved in snapshots. 
See also # Rclone documentation for Google Drive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/integrations/googledrive/","section":"Docs","summary":"Back up and restore your Google Drive with Plakar, and host Kloset stores in Google Drive.","title":"Google Drive","type":"docs"},{"content":" Google Drive # Google Drive is a widely used cloud storage service provided by Google, offering users the ability to store files, share documents, and collaborate in real time.\nRclone is a command-line program to manage files on cloud storage, and supports Google Drive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Google Drive remote must be configured. Typical use cases\nCold backup of Google Drive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Google Drive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with Google Drive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Google Drive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;Google Drive\u0026rdquo; from the list of supported storage providers. Leave client_id and client_secret empty to use Rclone\u0026rsquo;s defaults, or provide your own if you have them. Select the number corresponding to \u0026ldquo;Full access all files, excluding Application Data Folder.\u0026rdquo;, or to \u0026ldquo;Read-only access to file metadata and file contents.\u0026rdquo; if you only need read access. Leave the service account file empty unless you have one. Stay with the current settings, and do not edit advanced config. Choose to open the browser for authentication. Set whether to use a shared drive or not depending on your needs. Validate the remote configuration. 
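For orientation, the prompts above end up writing a remote section to your rclone configuration file. Assuming the remote is named mydrive and full access was chosen, the resulting entry looks roughly like this sketch (the OAuth token, generated during browser authentication, is abbreviated):

```ini
[mydrive]
type = drive
scope = drive
token = {"access_token":"…","token_type":"Bearer","expiry":"…"}
```

The exact fields vary with the options chosen during rclone config; running rclone config show mydrive prints the real values.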
To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Google Drive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Google Drive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Google Drive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # 
Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data via Rclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # The Google Drive API has rate limits; heavy usage may require throttling. File version history is not preserved. Only the current version of each file is snapshotted. Shared links and permissions are not preserved in snapshots. 
See also # Rclone documentation for Google Drive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/integrations/googledrive/","section":"Docs","summary":"Back up and restore your Google Drive with Plakar, and host Kloset stores in Google Drive.","title":"Google Drive","type":"docs"},{"content":" Logging In to Plakar # Plakar works without an account by default. Logging in is optional, but it unlocks additional features such as installing pre-built packages hosted on Plakar\u0026rsquo;s servers (so you don\u0026rsquo;t have to build them from source) and an alerting service that can notify you by email about important issues, such as a failed backup.\nLogging In # Using GitHub # $ plakar login -github Using Email # $ plakar login -email myemail@domain.com Enabling Alerting # After logging in, enable alerting to send backup metadata to Plakar\u0026rsquo;s servers for reporting:\n$ plakar service enable alerting Enable email notifications:\n$ plakar service set alerting report.email=true Alerting sends non-sensitive metadata (backup status, timestamps, sizes) to power the reporting dashboard and email notifications. Your backup data never leaves your system.\nInstalling Pre-Built Packages # Once logged in, you can install pre-built integration packages hosted on Plakar\u0026rsquo;s servers:\n$ plakar pkg add s3 $ plakar pkg add sftp $ plakar pkg add rclone Without logging in, you can still build these integrations from source.\nVerify Login Status # Check if you\u0026rsquo;re logged in:\n$ plakar login --status This displays your login status.\n","date":"18 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/guides/logging-in-to-plakar/","section":"Docs","summary":"Log in to unlock optional features like pre-built package installation and alerting.","title":"Logging In to Plakar","type":"docs"},{"content":" Logging In to Plakar # Plakar works without an account by default. 
Logging in is optional, but it unlocks additional features such as installing pre-built packages hosted on Plakar\u0026rsquo;s servers (so you don\u0026rsquo;t have to build them from source) and an alerting service that can notify you by email about important issues, such as a failed backup.\nLogging In # Using GitHub # $ plakar login -github Using Email # $ plakar login -email myemail@domain.com Enabling Alerting # After logging in, enable alerting to send backup metadata to Plakar\u0026rsquo;s servers for reporting:\n$ plakar service enable alerting Enable email notifications:\n$ plakar service set alerting report.email=true Alerting sends non-sensitive metadata (backup status, timestamps, sizes) to power the reporting dashboard and email notifications. Your backup data never leaves your system.\nInstalling Pre-Built Packages # Once logged in, you can install pre-built integration packages hosted on Plakar\u0026rsquo;s servers:\n$ plakar pkg add s3 $ plakar pkg add sftp $ plakar pkg add rclone Without logging in, you can still build these integrations from source.\nVerify Login Status # Check if you\u0026rsquo;re logged in:\n$ plakar login --status This displays your login status.\n","date":"18 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/logging-in-to-plakar/","section":"Docs","summary":"Log in to unlock optional features like pre-built package installation and alerting.","title":"Logging In to Plakar","type":"docs"},{"content":" Managing packages # Integration packages extend Plakar with connectors for cloud storage providers, databases, and other systems. This guide covers the full lifecycle of a package: installing, listing, upgrading, and removing.\nPlakar ships intentionally clean with only base connectors such as the filesystem connector. 
Plakar can be extended with integrations such as S3, SFTP, or PostgreSQL only when you need them, keeping the base install small and dependency-free.\nIntegrations are also versioned independently from Plakar itself, so you can pin a connector to a specific version or upgrade it without touching the rest of your setup.\nList installed packages # To see which packages are currently installed:\n$ plakar pkg list Install a package # Pre-built package # Pre-built packages are hosted on Plakar\u0026rsquo;s infrastructure and require you to be logged in to download them. To log in:\n$ plakar login Passphrase In v1.0.6 and below, only interactive login is supported. Non-interactive and token-based login are available from v1.1.0 and above.\nOnce logged in, install a package by name from the official plugin registry (e.g. the S3 integration):\n$ plakar pkg add s3 Local archive # If you built the package from source or have a .ptar file on hand, pass the path directly:\n$ plakar pkg add ./s3_v1.0.0_darwin_arm64.ptar This does not require a Plakar account.\nUpgrade a package # To upgrade to the latest available version, remove the existing package and reinstall it:\n$ plakar pkg rm s3 $ plakar pkg add s3 Upgrading preserves existing store, source, and destination configurations.\nRemove a package # $ plakar pkg rm s3 ","date":"10 April 2026","externalUrl":null,"permalink":"/docs/v1.0.5/guides/managing-packages/","section":"Docs","summary":"How to install, upgrade, and remove Plakar integration packages.","title":"Managing packages","type":"docs"},{"content":" Managing packages # Integration packages extend Plakar with connectors for cloud storage providers, databases, and other systems. This guide covers the full lifecycle of a package: installing, listing, upgrading, and removing.\nPlakar ships intentionally clean with only base connectors such as the filesystem connector. 
Plakar can be extended with integrations such as S3, SFTP, or PostgreSQL only when you need them, keeping the base install small and dependency-free.\nIntegrations are also versioned independently from Plakar itself, so you can pin a connector to a specific version or upgrade it without touching the rest of your setup.\nList installed packages # To see which packages are currently installed:\n$ plakar pkg list Install a package # Pre-built package # Pre-built packages are hosted on Plakar\u0026rsquo;s infrastructure and require you to be logged in to download them. To log in:\n$ plakar login Passphrase In v1.0.6 and below, only interactive login is supported. Non-interactive and token-based login are available from v1.1.0 and above.\nOnce logged in, install a package by name from the official plugin registry (e.g. the S3 integration):\n$ plakar pkg add s3 Local archive # If you built the package from source or have a .ptar file on hand, pass the path directly:\n$ plakar pkg add ./s3_v1.0.0_darwin_arm64.ptar This does not require a Plakar account.\nUpgrade a package # To upgrade to the latest available version, remove the existing package and reinstall it:\n$ plakar pkg rm s3 $ plakar pkg add s3 Upgrading preserves existing store, source, and destination configurations.\nRemove a package # $ plakar pkg rm s3 ","date":"10 April 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/managing-packages/","section":"Docs","summary":"How to install, upgrade, and remove Plakar integration packages.","title":"Managing packages","type":"docs"},{"content":" OneDrive # OneDrive is a widely used cloud storage service provided by Microsoft, offering users the ability to store files, share documents, and collaborate in real time.\nRclone is a command-line program to manage files on cloud storage, and supports OneDrive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage 
connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one OneDrive remote must be configured. Typical use cases\nCold backup of OneDrive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with OneDrive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with OneDrive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for OneDrive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;Microsoft OneDrive\u0026rdquo; from the list of supported storage providers. 
Leave client_id and client_secret empty to use Rclone\u0026rsquo;s defaults, or provide your own if you have them. Select your region (usually \u0026ldquo;Microsoft Cloud Global\u0026rdquo;). Enter service principal\u0026rsquo;s tenant ID if applicable, or leave empty. Stay with the current settings, and do not edit advanced config. Choose to open the browser for authentication. Once validated, select the type of connection. Select the drive to use. Validate the remote configuration. To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your OneDrive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your OneDrive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with OneDrive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot via Rclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via Rclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data via Rclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # The OneDrive API has rate limits; heavy usage may require throttling. File version history is not preserved. Only the current version of each file is snapshotted. Shared links and permissions are not preserved in snapshots. 
See also # Rclone documentation for OneDrive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/main/integrations/onedrive/","section":"Docs","summary":"Back up and restore your OneDrive with Plakar, and host Kloset stores in OneDrive.","title":"OneDrive","type":"docs"},{"content":" OneDrive # OneDrive is a widely used cloud storage service provided by Microsoft, offering users the ability to store files, share documents, and collaborate in real time.\nRclone is a command-line program to manage files on cloud storage, and supports OneDrive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one OneDrive remote must be configured. Typical use cases\nCold backup of OneDrive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with OneDrive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with OneDrive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for OneDrive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;Microsoft OneDrive\u0026rdquo; from the list of supported storage providers. Leave client_id and client_secret empty to use Rclone\u0026rsquo;s defaults, or provide your own if you have them. Select your region (usually \u0026ldquo;Microsoft Cloud Global\u0026rdquo;). Enter service principal\u0026rsquo;s tenant ID if applicable, or leave empty. Stay with the current settings, and do not edit advanced config. Choose to open the browser for authentication. Once validated, select the type of connection. Select the drive to use. Validate the remote configuration. 
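After completing the prompts, Rclone writes the remote to its configuration file. A freshly configured OneDrive remote looks roughly like the sketch below; the section name matches your remote name, and the token and drive_id values are placeholders. Exact fields vary with your Rclone version and the choices made above.

```ini
# Hypothetical rclone.conf entry for the remote created above
[mydrive]
type = onedrive
region = global
token = {...OAuth token issued by Microsoft, redacted...}
drive_id = placeholder-drive-id
drive_type = personal
```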
To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your OneDrive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your OneDrive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with OneDrive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot via Rclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via Rclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the 
rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data via Rclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # The OneDrive API has rate limits; heavy usage may require throttling. File version history is not preserved. Only the current version of each file is snapshotted. Shared links and permissions are not preserved in snapshots. 
See also # Rclone documentation for OneDrive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/integrations/onedrive/","section":"Docs","summary":"Back up and restore your OneDrive with Plakar, and host Kloset stores in OneDrive.","title":"OneDrive","type":"docs"},{"content":" OneDrive # OneDrive is a widely used cloud storage service provided by Microsoft, offering users the ability to store files, share documents, and collaborate in real time.\nRclone is a command-line program to manage files on cloud storage, and supports OneDrive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one OneDrive remote must be configured. Typical use cases\nCold backup of OneDrive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with OneDrive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with OneDrive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for OneDrive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;Microsoft OneDrive\u0026rdquo; from the list of supported storage providers. Leave client_id and client_secret empty to use Rclone\u0026rsquo;s defaults, or provide your own if you have them. Select your region (usually \u0026ldquo;Microsoft Cloud Global\u0026rdquo;). Enter service principal\u0026rsquo;s tenant ID if applicable, or leave empty. Stay with the current settings, and do not edit advanced config. Choose to open the browser for authentication. Once validated, select the type of connection. Select the drive to use. Validate the remote configuration. 
To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your OneDrive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your OneDrive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with OneDrive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot via Rclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data via Rclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the 
rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data via Rclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # The OneDrive API has rate limits; heavy usage may require throttling. File version history is not preserved. Only the current version of each file is snapshotted. Shared links and permissions are not preserved in snapshots. 
See also # Rclone documentation for OneDrive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/integrations/onedrive/","section":"Docs","summary":"Back up and restore your OneDrive with Plakar, and host Kloset stores in OneDrive.","title":"OneDrive","type":"docs"},{"content":" OneDrive # OneDrive is a widely used cloud storage service provided by Microsoft, offering users the ability to store files, share documents, and collaborate in real time.\nRclone is a command-line program to manage files on cloud storage, and supports OneDrive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one OneDrive remote must be configured. Typical use cases\nCold backup of OneDrive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with OneDrive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with OneDrive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for OneDrive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter the number corresponding to \u0026ldquo;Microsoft OneDrive\u0026rdquo; from the list of supported storage providers. Leave client_id and client_secret empty to use Rclone\u0026rsquo;s defaults, or provide your own if you have them. Select your region (usually \u0026ldquo;Microsoft Cloud Global\u0026rdquo;). Enter service principal\u0026rsquo;s tenant ID if applicable, or leave empty. Stay with the current settings, and do not edit advanced config. Choose to open the browser for authentication. Once validated, select the type of connection. Select the drive to use. Validate the remote configuration. 
To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your OneDrive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your OneDrive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with OneDrive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the 
rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # OneDrive API has rate limits, heavy usage may require throttling. File version history is not preserved. Only the current version of each file is snapshotted. Shared links and permissions are not preserved in snapshots. 
See also # Rclone documentation for OneDrive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/integrations/onedrive/","section":"Docs","summary":"Back up and restore your OneDrive with Plakar, and host Kloset stores in OneDrive.","title":"OneDrive","type":"docs"},{"content":" Logging In to Plakar # Plakar works without an account by default. Logging in is optional, but it unlocks additional features such as installing pre-built packages hosted on Plakar\u0026rsquo;s servers (so you don\u0026rsquo;t have to build them from source) and the alerting service, which can notify you by email about important issues such as a failed backup.\nLogging In # Using GitHub # $ plakar login -github Using Email # $ plakar login -email myemail@domain.com Enabling Alerting # After logging in, enable alerting to send backup metadata to Plakar\u0026rsquo;s servers for reporting:\n$ plakar service enable alerting Enable email notifications:\n$ plakar service set alerting report.email=true Alerting sends non-sensitive metadata (backup status, timestamps, sizes) to power the reporting dashboard and email notifications. Your backup data never leaves your system.\nNon-Interactive Login # For CI pipelines, remote servers, or automated jobs where interactive login isn\u0026rsquo;t possible, use token-based authentication.\nGenerate a Token # On a machine where you can log in interactively:\n$ plakar login $ plakar token create This outputs a token:\neyJhbGc...... Use the Token # On the non-interactive system, set the environment variable:\n$ export PLAKAR_TOKEN=eyJhbGc......
Plakar automatically uses this token for authentication.\nPersist the Token # To save the token in the local configuration:\n$ plakar login -env This reads PLAKAR_TOKEN from the environment and stores it in Plakar\u0026rsquo;s configuration file.\nInstalling Pre-Built Packages # Once logged in, you can install pre-built integration packages hosted on Plakar\u0026rsquo;s servers:\n$ plakar pkg add s3 $ plakar pkg add sftp $ plakar pkg add rclone Without logging in, you can still build these integrations from source.\nVerify Login Status # Check if you\u0026rsquo;re logged in:\n$ plakar login --status This displays your login status.\n","date":"18 March 2026","externalUrl":null,"permalink":"/docs/main/guides/logging-in-to-plakar/","section":"Docs","summary":"Log in to unlock optional features like pre-built package installation and alerting.","title":"Logging In to Plakar","type":"docs"},{"content":" Logging In to Plakar # Plakar works without an account by default. Logging in is optional, but it unlocks additional features such as installing pre-built packages hosted on Plakar\u0026rsquo;s servers (so you don\u0026rsquo;t have to build them from source) and the alerting service, which can notify you by email about important issues such as a failed backup.\nLogging In # Using GitHub # $ plakar login -github Using Email # $ plakar login -email myemail@domain.com Enabling Alerting # After logging in, enable alerting to send backup metadata to Plakar\u0026rsquo;s servers for reporting:\n$ plakar service enable alerting Enable email notifications:\n$ plakar service set alerting report.email=true Alerting sends non-sensitive metadata (backup status, timestamps, sizes) to power the reporting dashboard and email notifications.
Your backup data never leaves your system.\nNon-Interactive Login # For CI pipelines, remote servers, or automated jobs where interactive login isn\u0026rsquo;t possible, use token-based authentication.\nGenerate a Token # On a machine where you can log in interactively:\n$ plakar login $ plakar token create This outputs a token:\neyJhbGc...... Use the Token # On the non-interactive system, set the environment variable:\n$ export PLAKAR_TOKEN=eyJhbGc...... Plakar automatically uses this token for authentication.\nPersist the Token # To save the token in the local configuration:\n$ plakar login -env This reads PLAKAR_TOKEN from the environment and stores it in Plakar\u0026rsquo;s configuration file.\nInstalling Pre-Built Packages # Once logged in, you can install pre-built integration packages hosted on Plakar\u0026rsquo;s servers:\n$ plakar pkg add s3 $ plakar pkg add sftp $ plakar pkg add rclone Without logging in, you can still build these integrations from source.\nVerify Login Status # Check if you\u0026rsquo;re logged in:\n$ plakar login --status This displays your login status.\n","date":"18 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/logging-in-to-plakar/","section":"Docs","summary":"Log in to unlock optional features like pre-built package installation and alerting.","title":"Logging In to Plakar","type":"docs"},{"content":" Pruning Snapshots # plakar prune removes snapshots from a Kloset store. Snapshots can be selected for removal by age, tag, or retention policy.\nEvery backup you run creates a new snapshot in your Kloset store. If left unchecked, snapshots can accumulate indefinitely.\nPruning lets you define how much history you actually need, such as keeping hourly snapshots for the past week, daily ones for the past month, and monthly ones for the past year, discarding everything else.
This keeps your store from growing without bound while preserving important recovery points.\nBy default, plakar prune runs in dry-run mode and makes no changes; you\u0026rsquo;ll need to pass -apply to execute the operation.\nIn this guide, we will use a Kloset store located at $HOME/backups, but your store can be located anywhere:\nPreviewing what would be pruned # Before removing anything, check which snapshots would be affected:\n$ plakar at $HOME/backups prune -days 30 No snapshots are deleted without -apply. The output shows what would be removed.\nRemoving snapshots by age # To delete snapshots older than 30 days:\n$ plakar at $HOME/backups prune -days 30 -apply You can use other flags like -weeks, -months, or -years to specify age.\nRemoving snapshots by tag # To delete snapshots older than 30 days that carry a specific tag:\n$ plakar at $HOME/backups prune -days 30 -tag daily-backup -apply Only snapshots matching the tag are considered. Others are left untouched.\nApplying a retention policy # A retention policy keeps a defined number of snapshots across different time windows and deletes everything else. This is the most common way to keep a store bounded over time.\n$ plakar at $HOME/backups prune \\ -days 1 -per-day 7 \\ -weeks 4 -per-week 1 \\ -months 12 -per-month 1 \\ -years 5 -per-year 1 \\ -apply This can be broken down as:\n-days 1 -per-day 7 — For the past day, keep up to 7 snapshots. This preserves frequent checkpoints in the most recent period. -weeks 4 -per-week 1 — For the past 4 weeks, keep 1 snapshot per week. Older intra-day snapshots are pruned down to a single representative per week. -months 12 -per-month 1 — For the past 12 months, keep 1 snapshot per month. -years 5 -per-year 1 — For the past 5 years, keep 1 snapshot per year.
Everything outside those windows is deleted.\nUsing a named policy # Rather than specifying retention parameters on the command line each time, a named policy can be defined once and reused.\nYou can create a policy and configure its retention parameters:\n$ plakar policy add weekly $ plakar policy set weekly since=\u0026#39;3 months\u0026#39; $ plakar policy set weekly per-week=1 Then apply the policy:\n$ plakar at $HOME/backups prune -policy weekly -apply Managing policies # $ plakar policy show # list all policies (YAML by default) $ plakar policy show -json # output as JSON $ plakar policy show weekly # inspect a specific policy $ plakar policy set weekly per-week=2 # update a parameter $ plakar policy unset weekly per-week # remove a parameter $ plakar policy rm weekly # delete a policy Reclaiming storage after pruning # Pruning removes snapshots, but does not immediately free storage. Because Plakar deduplicates data across snapshots, the underlying chunks and packfiles remain until plakar maintenance runs (also consider the maintenance grace period).\nAfter pruning, run plakar maintenance to reclaim the freed space:\n$ plakar maintenance See also # plakar policy How maintenance works ","date":"24 April 2026","externalUrl":null,"permalink":"/docs/v1.0.5/guides/using-plakar-prune/","section":"Docs","summary":"Remove old snapshots from a Kloset store using age, tags, or retention policies.","title":"Pruning snapshots","type":"docs"},{"content":" Pruning Snapshots # plakar prune removes snapshots from a Kloset store. Snapshots can be selected for removal by age, tag, or retention policy.\nEvery backup you run creates a new snapshot in your Kloset store. If left unchecked, snapshots can accumulate indefinitely.\nPruning lets you define how much history you actually need, such as keeping hourly snapshots for the past week, daily ones for the past month, and monthly ones for the past year, discarding everything else.
This keeps your store from growing without bound while preserving important recovery points.\nBy default, plakar prune runs in dry-run mode and makes no changes; you\u0026rsquo;ll need to pass -apply to execute the operation.\nIn this guide, we will use a Kloset store located at $HOME/backups, but your store can be located anywhere:\nPreviewing what would be pruned # Before removing anything, check which snapshots would be affected:\n$ plakar at $HOME/backups prune -days 30 No snapshots are deleted without -apply. The output shows what would be removed.\nRemoving snapshots by age # To delete snapshots older than 30 days:\n$ plakar at $HOME/backups prune -days 30 -apply You can use other flags like -weeks, -months, or -years to specify age.\nRemoving snapshots by tag # To delete snapshots older than 30 days that carry a specific tag:\n$ plakar at $HOME/backups prune -days 30 -tag daily-backup -apply Only snapshots matching the tag are considered. Others are left untouched.\nApplying a retention policy # A retention policy keeps a defined number of snapshots across different time windows and deletes everything else. This is the most common way to keep a store bounded over time.\n$ plakar at $HOME/backups prune \\ -days 1 -per-day 7 \\ -weeks 4 -per-week 1 \\ -months 12 -per-month 1 \\ -years 5 -per-year 1 \\ -apply This can be broken down as:\n-days 1 -per-day 7 — For the past day, keep up to 7 snapshots. This preserves frequent checkpoints in the most recent period. -weeks 4 -per-week 1 — For the past 4 weeks, keep 1 snapshot per week. Older intra-day snapshots are pruned down to a single representative per week. -months 12 -per-month 1 — For the past 12 months, keep 1 snapshot per month. -years 5 -per-year 1 — For the past 5 years, keep 1 snapshot per year.
Everything outside those windows is deleted.\nUsing a named policy # Rather than specifying retention parameters on the command line each time, a named policy can be defined once and reused.\nYou can create a policy and configure its retention parameters:\n$ plakar policy add weekly $ plakar policy set weekly since=\u0026#39;3 months\u0026#39; $ plakar policy set weekly per-week=1 Then apply the policy:\n$ plakar at $HOME/backups prune -policy weekly -apply Managing policies # $ plakar policy show # list all policies (YAML by default) $ plakar policy show -json # output as JSON $ plakar policy show weekly # inspect a specific policy $ plakar policy set weekly per-week=2 # update a parameter $ plakar policy unset weekly per-week # remove a parameter $ plakar policy rm weekly # delete a policy Reclaiming storage after pruning # Pruning removes snapshots, but does not immediately free storage. Because Plakar deduplicates data across snapshots, the underlying chunks and packfiles remain until plakar maintenance runs (also consider the maintenance grace period).\nAfter pruning, run plakar maintenance to reclaim the freed space:\n$ plakar maintenance See also # plakar policy How maintenance works ","date":"24 April 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/using-plakar-prune/","section":"Docs","summary":"Remove old snapshots from a Kloset store using age, tags, or retention policies.","title":"Pruning snapshots","type":"docs"},{"content":" Managing packages # Integration packages extend Plakar with connectors for cloud storage providers, databases, and other systems. This guide covers the full lifecycle of a package: installing, listing, upgrading, and removing.\nPlakar intentionally ships with only base connectors, such as the filesystem connector.
You add integrations such as S3, SFTP, or PostgreSQL only when you need them, keeping the base install small and dependency-free.\nIntegrations are also versioned independently from Plakar itself, so you can pin a connector to a specific version or upgrade it without touching the rest of your setup.\nList installed packages # To see which packages are currently installed:\n$ plakar pkg list Install a package # Pre-built package # Pre-built packages are hosted on Plakar\u0026rsquo;s infrastructure and require you to be logged in to download them. If you are not logged in, plakar pkg add will fail with an authentication error.\nTo log in:\n$ plakar login For CI pipelines or automated environments where interactive login is not possible, see Logging In to Plakar.\nOnce logged in, install a package by name from the official plugin registry (e.g. the S3 integration):\n$ plakar pkg add s3 To install a specific version:\n$ plakar pkg add s3@v1.0.0 Building from source # If you are not logged in or prefer not to use pre-built packages, you can build packages locally with plakar pkg build. This does not require a Plakar account but does require a working Go toolchain and make.\n$ plakar pkg build s3 On success, a .ptar archive is generated in the current directory.
Install it with:\n$ plakar pkg add ./s3_v1.0.0_darwin_arm64.ptar Upgrade a package # To upgrade a specific package to the latest available version:\n$ plakar pkg add -u s3 To upgrade all installed packages at once:\n$ plakar pkg add -u Upgrading preserves existing store, source, and destination configurations.\nRemove a package # $ plakar pkg rm s3 ","date":"10 April 2026","externalUrl":null,"permalink":"/docs/main/guides/managing-packages/","section":"Docs","summary":"How to install, upgrade, and remove Plakar integration packages.","title":"Managing packages","type":"docs"},{"content":" Managing packages # Integration packages extend Plakar with connectors for cloud storage providers, databases, and other systems. This guide covers the full lifecycle of a package: installing, listing, upgrading, and removing.\nPlakar intentionally ships with only base connectors, such as the filesystem connector. You add integrations such as S3, SFTP, or PostgreSQL only when you need them, keeping the base install small and dependency-free.\nIntegrations are also versioned independently from Plakar itself, so you can pin a connector to a specific version or upgrade it without touching the rest of your setup.\nList installed packages # To see which packages are currently installed:\n$ plakar pkg list Install a package # Pre-built package # Pre-built packages are hosted on Plakar\u0026rsquo;s infrastructure and require you to be logged in to download them. If you are not logged in, plakar pkg add will fail with an authentication error.\nTo log in:\n$ plakar login For CI pipelines or automated environments where interactive login is not possible, see Logging In to Plakar.\nOnce logged in, install a package by name from the official plugin registry (e.g.
the S3 integration):\n$ plakar pkg add s3 To install a specific version:\n$ plakar pkg add s3@v1.0.0 Building from source # If you are not logged in or prefer not to use pre-built packages, you can build packages locally with plakar pkg build. This does not require a Plakar account but does require a working Go toolchain and make.\n$ plakar pkg build s3 On success, a .ptar archive is generated in the current directory. Install it with:\n$ plakar pkg add ./s3_v1.0.0_darwin_arm64.ptar Upgrade a package # To upgrade a specific package to the latest available version:\n$ plakar pkg add -u s3 To upgrade all installed packages at once:\n$ plakar pkg add -u Upgrading preserves existing store, source, and destination configurations.\nRemove a package # $ plakar pkg rm s3 ","date":"10 April 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/managing-packages/","section":"Docs","summary":"How to install, upgrade, and remove Plakar integration packages.","title":"Managing packages","type":"docs"},{"content":" OpenDrive # The OpenDrive integration for Plakar lets you back up and restore data from OpenDrive, as well as host Kloset stores in OpenDrive, using Rclone.\nRclone is a command-line program to manage files on cloud storage, and supports OpenDrive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one OpenDrive remote must be configured. 
Typical use cases\nCold backup of OpenDrive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with OpenDrive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with OpenDrive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for OpenDrive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter your username (your OpenDrive email). Enter your password. Confirm the settings. 
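Once confirmed, Rclone stores the remote in its rclone.conf file. As a rough sketch (the values below are placeholders, and the password is stored in Rclone's obscured form rather than plain text), the resulting section looks something like:

```ini
[mydrive]
type = opendrive
username = you@example.com
password = <obscured>
```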
To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your OpenDrive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your OpenDrive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with OpenDrive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up OpenDrive directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import 
the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to OpenDrive directories.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nSee also # Rclone documentation for OpenDrive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/main/integrations/opendrive/","section":"Docs","summary":"Back up and restore OpenDrive data with Plakar, and host Kloset stores in OpenDrive.","title":"OpenDrive","type":"docs"},{"content":" OpenDrive # The OpenDrive integration for Plakar lets you back up and restore data from OpenDrive, as well as host Kloset stores in OpenDrive, using Rclone.\nRclone is a command-line program to manage files on cloud storage, and supports OpenDrive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one OpenDrive remote must be configured. Typical use cases\nCold backup of OpenDrive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with OpenDrive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with OpenDrive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for OpenDrive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter your username (your OpenDrive email). Enter your password. Confirm the settings. To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your OpenDrive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your OpenDrive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with OpenDrive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. 
# Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up OpenDrive directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to OpenDrive directories.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nSee also # Rclone documentation for OpenDrive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/integrations/opendrive/","section":"Docs","summary":"Back up and restore OpenDrive data with Plakar, and host Kloset stores in OpenDrive.","title":"OpenDrive","type":"docs"},{"content":" OpenDrive # The OpenDrive integration for Plakar lets you back up and restore data from OpenDrive, as well as host Kloset stores in OpenDrive, using Rclone.\nRclone is a command-line program to manage files 
on cloud storage, and supports OpenDrive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one OpenDrive remote must be configured. Typical use cases\nCold backup of OpenDrive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with OpenDrive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with OpenDrive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for OpenDrive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter your username (your OpenDrive email). 
Enter your password. Confirm the settings. To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your OpenDrive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your OpenDrive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with OpenDrive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up OpenDrive directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e 
Plakar --\u003e Store Configure # # Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to OpenDrive directories.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nSee also # Rclone documentation for OpenDrive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/integrations/opendrive/","section":"Docs","summary":"Back up and restore OpenDrive data with Plakar, and host Kloset stores in OpenDrive.","title":"OpenDrive","type":"docs"},{"content":" OpenDrive # The OpenDrive integration for Plakar lets you back up and restore data from OpenDrive, as well as host Kloset stores in OpenDrive, using Rclone.\nRclone is a command-line program to manage files on cloud storage, and supports OpenDrive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one OpenDrive remote must be configured. Typical use cases\nCold backup of OpenDrive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with OpenDrive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with OpenDrive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for OpenDrive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter your username (your OpenDrive email). Enter your password. Confirm the settings. To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your OpenDrive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your OpenDrive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with OpenDrive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # The Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. 
# Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up OpenDrive directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # The Rclone package provides a destination connector to restore snapshots to OpenDrive directories.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nSee also # Rclone documentation for OpenDrive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/integrations/opendrive/","section":"Docs","summary":"Back up and restore OpenDrive data with Plakar, and host Kloset stores in OpenDrive.","title":"OpenDrive","type":"docs"},{"content":" Pruning Snapshots # plakar prune removes snapshots from a Kloset store. 
Snapshots can be selected for removal by age, tag, or retention policy.\nEvery backup you run creates a new snapshot in your Kloset store. If left unchecked, snapshots can accumulate indefinitely.\nPruning lets you define how much history you actually need, such as keeping hourly snapshots for the past week, daily ones for the past month, and monthly ones for the past year, and discarding everything else. This keeps your store from growing without bound while preserving important recovery points.\nBy default, plakar prune runs in dry-run mode and makes no changes; you\u0026rsquo;ll need to pass -apply to execute the operation.\nIn this guide, we will use a Kloset store located at $HOME/backups, but your store can be located anywhere else:\nPreviewing what would be pruned # Before removing anything, check which snapshots would be affected:\n$ plakar at $HOME/backups prune -days 30 No snapshots are deleted without -apply. The output shows what would be removed.\nRemoving snapshots by age # To delete snapshots older than 30 days:\n$ plakar at $HOME/backups prune -days 30 -apply You can use other flags like -weeks, -months, or -years to specify age.\nRemoving snapshots by tag # To delete snapshots older than 30 days that carry a specific tag:\n$ plakar at $HOME/backups prune -days 30 -tag daily-backup -apply Only snapshots matching the tag are considered. Others are left untouched.\nApplying a retention policy # A retention policy keeps a defined number of snapshots across different time windows and deletes everything else. This is the most common way to keep a store bounded over time.\n$ plakar at $HOME/backups prune \ -days 1 -per-day 7 \ -weeks 4 -per-week 1 \ -months 12 -per-month 1 \ -years 5 -per-year 1 \ -apply This can be broken down as:\n-days 1 -per-day 7 — For the past day, keep up to 7 snapshots. This preserves frequent checkpoints in the most recent period. -weeks 4 -per-week 1 — For the past 4 weeks, keep 1 snapshot per week.
Older intra-day snapshots are pruned down to a single representative per week. -months 12 -per-month 1 — For the past 12 months, keep 1 snapshot per month. -years 5 -per-year 1 — For the past 5 years, keep 1 snapshot per year. Everything outside those windows is deleted.\nUsing a named policy # Rather than specifying retention parameters on the command line each time, a named policy can be defined once and reused.\nYou can create a policy and configure its retention parameters:\n$ plakar policy add weekly $ plakar policy set weekly since=\u0026#39;3 months\u0026#39; $ plakar policy set weekly per-week=1 Then apply the policy:\n$ plakar at $HOME/backups prune -policy weekly -apply Managing policies # $ plakar policy show # list all policies (YAML by default) $ plakar policy show -json # output as JSON $ plakar policy show weekly # inspect a specific policy $ plakar policy set weekly per-week=2 # update a parameter $ plakar policy unset weekly per-week # remove a parameter $ plakar policy rm weekly # delete a policy Reclaiming storage after pruning # Pruning removes snapshots, but does not immediately free storage. Because Plakar deduplicates data across snapshots, the underlying chunks and packfiles remain until plakar maintenance runs (also consider the maintenance grace period).\nAfter pruning, run plakar maintenance to reclaim the freed space:\n$ plakar maintenance See also # plakar policy How maintenance works ","date":"24 April 2026","externalUrl":null,"permalink":"/docs/main/guides/using-plakar-prune/","section":"Docs","summary":"Remove old snapshots from a Kloset store using age, tags, or retention policies.","title":"Pruning snapshots","type":"docs"},{"content":" Pruning Snapshots # plakar prune removes snapshots from a Kloset store. Snapshots can be selected for removal by age, tag, or retention policy.\nEvery backup you run creates a new snapshot in your Kloset store. 
If left unchecked, snapshots can accumulate indefinitely.\nPruning lets you define how much history you actually need, such as keeping hourly snapshots for the past week, daily ones for the past month, and monthly ones for the past year, and discarding everything else. This keeps your store from growing without bound while preserving important recovery points.\nBy default, plakar prune runs in dry-run mode and makes no changes; you\u0026rsquo;ll need to pass -apply to execute the operation.\nIn this guide, we will use a Kloset store located at $HOME/backups, but your store can be located anywhere else:\nPreviewing what would be pruned # Before removing anything, check which snapshots would be affected:\n$ plakar at $HOME/backups prune -days 30 No snapshots are deleted without -apply. The output shows what would be removed.\nRemoving snapshots by age # To delete snapshots older than 30 days:\n$ plakar at $HOME/backups prune -days 30 -apply You can use other flags like -weeks, -months, or -years to specify age.\nRemoving snapshots by tag # To delete snapshots older than 30 days that carry a specific tag:\n$ plakar at $HOME/backups prune -days 30 -tag daily-backup -apply Only snapshots matching the tag are considered. Others are left untouched.\nApplying a retention policy # A retention policy keeps a defined number of snapshots across different time windows and deletes everything else. This is the most common way to keep a store bounded over time.\n$ plakar at $HOME/backups prune \ -days 1 -per-day 7 \ -weeks 4 -per-week 1 \ -months 12 -per-month 1 \ -years 5 -per-year 1 \ -apply This can be broken down as:\n-days 1 -per-day 7 — For the past day, keep up to 7 snapshots. This preserves frequent checkpoints in the most recent period. -weeks 4 -per-week 1 — For the past 4 weeks, keep 1 snapshot per week. Older intra-day snapshots are pruned down to a single representative per week. -months 12 -per-month 1 — For the past 12 months, keep 1 snapshot per month.
-years 5 -per-year 1 — For the past 5 years, keep 1 snapshot per year. Everything outside those windows is deleted.\nUsing a named policy # Rather than specifying retention parameters on the command line each time, a named policy can be defined once and reused.\nYou can create a policy and configure its retention parameters:\n$ plakar policy add weekly $ plakar policy set weekly since=\u0026#39;3 months\u0026#39; $ plakar policy set weekly per-week=1 Then apply the policy:\n$ plakar at $HOME/backups prune -policy weekly -apply Managing policies # $ plakar policy show # list all policies (YAML by default) $ plakar policy show -json # output as JSON $ plakar policy show weekly # inspect a specific policy $ plakar policy set weekly per-week=2 # update a parameter $ plakar policy unset weekly per-week # remove a parameter $ plakar policy rm weekly # delete a policy Reclaiming storage after pruning # Pruning removes snapshots, but does not immediately free storage. Because Plakar deduplicates data across snapshots, the underlying chunks and packfiles remain until plakar maintenance runs (also consider the maintenance grace period).\nAfter pruning, run plakar maintenance to reclaim the freed space:\n$ plakar maintenance See also # plakar policy How maintenance works ","date":"24 April 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/using-plakar-prune/","section":"Docs","summary":"Remove old snapshots from a Kloset store using age, tags, or retention policies.","title":"Pruning snapshots","type":"docs"},{"content":" Proton Drive # The Proton Drive integration package for Plakar allows you to back up and restore data to and from Proton Drive cloud storage, as well as host Kloset stores directly within Proton Drive. 
It is built on top of Rclone, a command-line program to manage files on cloud storage, and supports Proton Drive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Proton Drive remote must be configured. Typical use cases\nCold backup of Proton Drive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Proton Drive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with Proton Drive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Proton Drive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a 
new remote. Name the remote (e.g., mydrive). Enter your credentials. Validate the remote configuration. To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Proton Drive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Proton Drive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Proton Drive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # Hosting Kloset Stores on Proton Drive See the section Limitations and Considerations for important information about hosting Kloset stores on Proton Drive.\nThe Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # Restoring to Proton Drive See the section Limitations and considerations for important information about restoring to Proton Drive.\nThe Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data via Rclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # The Proton Drive API has rate limits; heavy usage may require throttling. At the time of writing, Proton Drive support is in beta in Rclone and write operations are not supported. You will be able to back up from Proton Drive, but not restore to it or host Kloset stores on it until the issue is resolved.
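Given the read-only limitation above, it can be worth confirming what your installed Rclone build reports for the backend before scheduling restores. A minimal sketch, assuming a configured remote named mydrive; the exact capability flags printed vary by Rclone version:

```shell
# Print the capability flags Rclone reports for the remote's backend.
# While Proton Drive support remains read-only, write-related
# capabilities will be absent or disabled in this output.
rclone backend features mydrive:

# Reads still work, so listing a few entries is a safe smoke test:
rclone ls mydrive: | head -5
```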
See also # Rclone documentation for Proton Drive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/main/integrations/protondrive/","section":"Docs","summary":"Back up and restore your Proton Drive with Plakar, and host Kloset stores in Proton Drive.","title":"Proton Drive","type":"docs"},{"content":" Proton Drive # The Proton Drive integration package for Plakar allows you to back up and restore data to and from Proton Drive cloud storage, as well as host Kloset stores directly within Proton Drive. It is built on top of Rclone, a command-line program to manage files on cloud storage, and supports Proton Drive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Proton Drive remote must be configured. Typical use cases\nCold backup of Proton Drive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Proton Drive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with Proton Drive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Proton Drive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter your credentials. Validate the remote configuration. 
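The interactive flow above can also be scripted. A hedged sketch using rclone config create, assuming the backend type protondrive and the option names username and password; check rclone config providers for the exact names in your Rclone version:

```shell
# Create the Proton Drive remote non-interactively.
# "mydrive", the email, and the password value are placeholders.
rclone config create mydrive protondrive \
  username you@example.com \
  password "$(rclone obscure 'your-proton-password')"
```

rclone obscure encodes the password the way the interactive prompt would before it is written to the configuration file.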
To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Proton Drive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Proton Drive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Proton Drive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # Hosting Kloset Stores on Proton Drive See the section Limitations and Considerations for important information about hosting Kloset stores on Proton Drive.\nThe Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # Restoring to Proton Drive See the section Limitations and considerations for important information about restoring to Proton Drive.\nThe Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data via Rclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # The Proton Drive API has rate limits; heavy usage may require throttling. At the time of writing, Proton Drive support is in beta in Rclone and write operations are not supported. You will be able to back up from Proton Drive, but not restore to it or host Kloset stores on it until the issue is resolved.
See also # Rclone documentation for Proton Drive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/integrations/protondrive/","section":"Docs","summary":"Back up and restore your Proton Drive with Plakar, and host Kloset stores in Proton Drive.","title":"Proton Drive","type":"docs"},{"content":" Proton Drive # The Proton Drive integration package for Plakar allows you to back up and restore data to and from Proton Drive cloud storage, as well as host Kloset stores directly within Proton Drive. It is built on top of Rclone, a command-line program to manage files on cloud storage, and supports Proton Drive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Proton Drive remote must be configured. Typical use cases\nCold backup of Proton Drive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Proton Drive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with Proton Drive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Proton Drive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter your credentials. Validate the remote configuration. 
To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Proton Drive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Proton Drive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Proton Drive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # Hosting Kloset Stores on Proton Drive See the section Limitations and Considerations for important information about hosting Kloset stores on Proton Drive.\nThe Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # Restoring to Proton Drive See the section Limitations and considerations for important information about restoring to Proton Drive.\nThe Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # The Proton Drive API has rate limits; heavy usage may require throttling. At the time of writing, Proton Drive support is in beta in Rclone and write operations are not supported. You will be able to back up from Proton Drive, but not restore to it or host Kloset stores on it until the issue is resolved. 
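Because of the rate limits noted above, scheduled backups to or from Proton Drive may benefit from a retry wrapper. The sketch below is illustrative only: the run_with_retry helper and its linear backoff are assumptions introduced here, not features of Plakar or Rclone, and the commented-out plakar invocation mirrors the commands shown earlier on this page.

```shell
# Sketch: retry wrapper for backup jobs against rate-limited remotes.
# run_with_retry and the linear backoff policy are illustrative
# assumptions, not part of Plakar or Rclone.

# run_with_retry <max_attempts> <base_delay_seconds> <command...>
run_with_retry() {
    max=$1
    delay=$2
    shift 2
    attempt=1
    while ! "$@"; do
        if [ "$attempt" -ge "$max" ]; then
            return 1                    # give up after max attempts
        fi
        sleep $((delay * attempt))      # wait a little longer each time
        attempt=$((attempt + 1))
    done
    return 0
}

# Example (uses the store configured earlier on this page):
# run_with_retry 3 60 plakar at "@mydrive" backup /etc
```

A cron job could wrap its nightly backup in run_with_retry; tune the attempt count and delay to your remote's limits.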
See also # Rclone documentation for Proton Drive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/integrations/protondrive/","section":"Docs","summary":"Back up and restore your Proton Drive with Plakar, and host Kloset stores in Proton Drive.","title":"Proton Drive","type":"docs"},{"content":" Proton Drive # The Proton Drive integration package for Plakar allows you to back up and restore data to and from Proton Drive cloud storage, as well as host Kloset stores directly within Proton Drive. It is built on top of Rclone, a command-line program to manage files on cloud storage, and supports Proton Drive as one of its many backends.\nThe Rclone integration package for Plakar provides three connectors:\nConnector type Description Storage connector Host a Kloset store inside a Rclone remote. Source connector Back up a Rclone remote into a Kloset store. Destination connector Restore data from a Kloset store into a Rclone remote. Requirements\nRclone must be installed, and at least one Proton Drive remote must be configured. Typical use cases\nCold backup of Proton Drive folders Long-term archiving and disaster recovery Portable export and vendor escape to other platforms Installation # To interact with Proton Drive, you need to install the Rclone Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Rclone package:\n$ plakar pkg add rclone Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build rclone A package archive will be created in the current directory (e.g., rclone_v1.0.0_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./rclone_v1.0.0_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nGenerate Rclone configuration # Install Rclone on your system by following the instructions at https://rclone.org/install/.\nThen, run the following command to configure Rclone with Proton Drive:\n$ rclone config You will be guided through a series of prompts to set up a new remote for Proton Drive.\nFor Rclone v1.72.1, the configuration flow is as follows:\nChoose n to create a new remote. Name the remote (e.g., mydrive). Enter your credentials. Validate the remote configuration. 
To verify that the remote is configured, run:\n$ rclone config show mydrive And to verify you have access to your Proton Drive files, run:\n$ rclone ls mydrive: The output should list the files and folders in your Proton Drive.\nConnectors # The Rclone package provides storage, source, and destination connectors to interact with Proton Drive via Rclone.\nYou can use any combination of these connectors together with other supported Plakar connectors.\nStorage connector # Hosting Kloset Stores on Proton Drive See the section Limitations and Considerations for important information about hosting Kloset stores on Proton Drive.\nThe Plakar Rclone package provides a storage connector to host Kloset stores on Rclone remotes.\nflowchart LR Source[\"Source data\"] Plakar[\"Plakar\"] Via[\"Store snapshot viaRclone storage connector\"] subgraph Store[\"Rclone Remote\"] Kloset[\"Kloset Store\"] end Source --\u003e Plakar --\u003e Via --\u003e Kloset Configure # # Import the rclone configuration as a storage configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar store import -rclone mydrive # Initialize the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; create # List snapshots in the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; ls # Verify integrity of the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; check # Back up a local folder to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup /etc # Back up a source configured in Plakar to the Kloset store $ plakar at \u0026#34;@mydrive\u0026#34; backup \u0026#34;@my_source\u0026#34; Options # These options can be set when configuring the storage connector with plakar store add or plakar store set:\nOption Purpose passphrase The Kloset store passphrase Source connector # The Plakar Rclone package provides a source connector to back up remote directories accessible via Rclone.\nflowchart LR subgraph Source[\"Rclone Remote\"] FS[\"Data\"] end Plakar[\"Plakar\"] Via[\"Retrieve data viaRclone source connector\"] Store[\"Kloset Store\"] FS --\u003e Via --\u003e Plakar --\u003e Store Configure # # Import the rclone configuration as a source configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. 
$ rclone config show | plakar source import -rclone mydrive # Back up the remote directory to the Kloset store on the filesystem $ plakar at /var/backups backup \u0026#34;@mydrive\u0026#34; # Or back up the remote directory to a Kloset store configured with \u0026#34;plakar store add\u0026#34; $ plakar at \u0026#34;@store\u0026#34; backup \u0026#34;@mydrive\u0026#34; Options # The Rclone source connector doesn\u0026rsquo;t support any specific options.\nDestination connector # Restoring to Proton Drive See the section Limitations and considerations for important information about restoring to Proton Drive.\nThe Rclone package provides a destination connector to restore snapshots to remote directories reachable over Rclone.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Via[\"Push data viaRclone destination connector\"] subgraph Destination[\"Rclone Remote\"] FS[\"Data\"] end Store --\u003e Plakar --\u003e Via --\u003e FS Configure # # Import the rclone configuration as a destination configuration. # Replace \u0026#34;mydrive\u0026#34; with your Rclone remote name. $ rclone config show | plakar destination import -rclone mydrive # Restore a snapshot from a filesystem-hosted Kloset store to the Rclone remote $ plakar at /var/backups restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; # Or restore a snapshot from the Kloset store configured with \u0026#34;plakar store add store …\u0026#34; $ plakar at \u0026#34;@store\u0026#34; restore -to \u0026#34;@mydrive\u0026#34; \u0026lt;snapshot_id\u0026gt; Options # The Rclone destination connector doesn\u0026rsquo;t support any specific options.\nLimitations and considerations # The Proton Drive API has rate limits; heavy usage may require throttling. At the time of writing, Proton Drive support is in beta in Rclone and write operations are not supported. You will be able to back up from Proton Drive, but not restore to it or host Kloset stores on it until the issue is resolved. 
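Because of the rate limits noted above, scheduled backups to or from Proton Drive may benefit from a retry wrapper. The sketch below is illustrative only: the run_with_retry helper and its linear backoff are assumptions introduced here, not features of Plakar or Rclone, and the commented-out plakar invocation mirrors the commands shown earlier on this page.

```shell
# Sketch: retry wrapper for backup jobs against rate-limited remotes.
# run_with_retry and the linear backoff policy are illustrative
# assumptions, not part of Plakar or Rclone.

# run_with_retry <max_attempts> <base_delay_seconds> <command...>
run_with_retry() {
    max=$1
    delay=$2
    shift 2
    attempt=1
    while ! "$@"; do
        if [ "$attempt" -ge "$max" ]; then
            return 1                    # give up after max attempts
        fi
        sleep $((delay * attempt))      # wait a little longer each time
        attempt=$((attempt + 1))
    done
    return 0
}

# Example (uses the store configured earlier on this page):
# run_with_retry 3 60 plakar at "@mydrive" backup /etc
```

A cron job could wrap its nightly backup in run_with_retry; tune the attempt count and delay to your remote's limits.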
See also # Rclone documentation for Proton Drive ","date":"20 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/integrations/protondrive/","section":"Docs","summary":"Back up and restore your Proton Drive with Plakar, and host Kloset stores in Proton Drive.","title":"Proton Drive","type":"docs"},{"content":" MySQL # MySQL offers flexible backup strategies to suit different requirements. The most common approach uses mysqldump to create logical backups - SQL dumps that can be easily restored across different MySQL versions and platforms.\nFor a comprehensive understanding of MySQL backup strategies, we recommend reading the official MySQL documentation on Backup and Recovery.\nLogical backups with SQL dumps Back up MySQL databases using mysqldump and restore from these backups.\nPhysical backups Perform physical backups of MySQL databases using file copy or Percona XtraBackup with Plakar.\n","date":"18 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/mysql/","section":"Docs","summary":"Guides on backing up and restoring MySQL database","title":"MySQL","type":"docs"},{"content":" Proxmox # The Proxmox integration wraps Proxmox\u0026rsquo;s native vzdump tool to back up virtual machines and containers into a Kloset store. Plakar handles encryption, deduplication, and snapshot management on top of the archives that vzdump produces.\nThe integration provides two connectors:\nConnector type Description Source connector Back up VMs and containers from a Proxmox node into a Kloset store. Destination connector Restore snapshots from a Kloset store back to a Proxmox node. Requirements\nProxmox VE with vzdump available on the node. SSH access to the Proxmox node with appropriate permissions (remote mode). Plakar v1.1.0-beta or later. Typical use cases\nEncrypted, deduplicated backups of Proxmox VMs and containers. Cross-cluster VM migration and restore. Long-term archiving to object storage (S3, Scaleway, OVH, Exoscale). 
Centralized backup of multiple hypervisors from a single Plakar instance. Installation # The Proxmox integration is distributed as a Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Proxmox package:\n$ plakar pkg add proxmox Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build proxmox A package archive will be created in the current directory (e.g., proxmox_v1.1.0-rc.1_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./proxmox_v1.1.0-rc.1_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nOperating modes # The Proxmox integration supports two operating modes.\nLocal mode — Plakar runs directly on the Proxmox node alongside vzdump:\nProxmox node ├ vzdump └ plakar Remote mode — Plakar runs on a separate machine and connects to the Proxmox node over SSH:\nBackup server │ │ SSH ▼ Proxmox node └ vzdump Remote mode allows a single Plakar instance to back up multiple hypervisors. 
The operating mode is set via the mode option when configuring a source or destination.\nSource connector # The source connector invokes vzdump on the Proxmox node, collects the resulting archive, and ingests it into a Kloset store with encryption and deduplication.\nflowchart LR subgraph Source[\"Proxmox Node\"] Vzdump[\"vzdump\"] end Plakar[\"Plakar\"] Via[\"Retrieve archive viaSSH\"] Transform[\"Encrypt \u0026 deduplicate\"] Store[\"Kloset Store\"] Vzdump --\u003e Via --\u003e Plakar --\u003e Transform --\u003e Store Configure # Register a Proxmox source:\n$ plakar source add myProxmox proxmox+backup://10.0.0.10 \\ mode=remote \\ conn_username=root \\ conn_identity_file=/path/to/key \\ conn_method=identity Back up workloads # Back up a single virtual machine by ID:\n$ plakar backup -o vmid=101 @myProxmox Back up all machines in a pool:\n$ plakar backup -o pool=prod @myProxmox Back up the entire hypervisor:\n$ plakar backup -o all @myProxmox Options # Option Required Description location Yes Proxmox node address. Format: proxmox+backup://\u0026lt;host\u0026gt; mode Yes Operating mode. local or remote. conn_username Yes (remote) SSH username on the Proxmox node. conn_identity_file No Path to the SSH private key. Required when conn_method=identity. conn_method No Authentication method. identity for key-based auth. 
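Since remote mode lets one Plakar instance back up several hypervisors, it can help to script the per-node source registration. The sketch below is a hedged example: the node list, the source_name naming scheme, and the print_registrations helper are assumptions introduced here; it only prints plakar source add commands (matching the Configure example above) so they can be reviewed before being run.

```shell
# Sketch: one source per Proxmox node, driven from a list of addresses.
# source_name and print_registrations are illustrative assumptions; the
# printed commands mirror the Configure example on this page.

# source_name <host> -> a source name derived from the node address
source_name() {
    echo "proxmox_$(echo "$1" | tr '.-' '__')"
}

# Print (rather than run) one "plakar source add" command per node,
# so the registrations can be reviewed first.
print_registrations() {
    for host in "$@"; do
        echo "plakar source add $(source_name "$host")" \
             "proxmox+backup://$host" \
             "mode=remote conn_username=root" \
             "conn_identity_file=/path/to/key conn_method=identity"
    done
}

# print_registrations 10.0.0.10 10.0.0.11
```

Piping the output through a review step (or into sh once verified) registers every node; backups can then target each source with plakar backup @<name>.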
Destination connector # The destination connector uploads a vzdump archive from a Plakar snapshot to a Proxmox node and restores it using native Proxmox tools: qmrestore for virtual machines and pct restore for containers.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Transform[\"Decrypt \u0026 reconstruct\"] Via[\"Push archive viaSSH\"] subgraph Destination[\"Proxmox Node\"] Restore[\"qmrestore / pct restore\"] end Store --\u003e Plakar --\u003e Transform --\u003e Via --\u003e Restore Configure # Register a Proxmox destination:\n$ plakar destination add myProxmox \\ proxmox+backup://10.0.0.10 \\ mode=remote \\ conn_username=root \\ conn_identity_file=/path/to/key \\ conn_method=identity Restore workloads # Restore all machines in a snapshot:\n$ plakar restore -to @myProxmox \u0026lt;snapshot_id\u0026gt; Restore a single VM from a snapshot containing multiple machines:\n$ plakar restore -to @myProxmox \u0026lt;snapshot_id\u0026gt;:/backup/qemu/101_myvm Options # Option Required Description location Yes Proxmox node address. Format: proxmox+backup://\u0026lt;host\u0026gt; mode Yes Operating mode. local or remote. conn_username Yes (remote) SSH username on the Proxmox node. conn_identity_file No Path to the SSH private key. Required when conn_method=identity. conn_method No Authentication method. identity for key-based auth. Limitations and scope # What is captured during backup\nVirtual machine and container disk images, as produced by vzdump. Proxmox configuration associated with each backed-up workload. Snapshot consistency\nPlakar relies on vzdump for snapshot consistency. For live machines, vzdump uses QEMU guest agent or suspend-resume to produce a consistent backup. 
Refer to the Proxmox vzdump documentation for details on consistency modes.\nSee also # Proxmox VE Backup and Restore Backing up Proxmox with Plakar: a third-party integration built in a few days ","date":"30 March 2026","externalUrl":null,"permalink":"/docs/main/integrations/proxmox/","section":"Docs","summary":"Back up and restore Proxmox virtual machines and containers with Plakar.","title":"Proxmox","type":"docs"},{"content":" Proxmox # The Proxmox integration wraps Proxmox\u0026rsquo;s native vzdump tool to back up virtual machines and containers into a Kloset store. Plakar handles encryption, deduplication, and snapshot management on top of the archives that vzdump produces.\nThe integration provides two connectors:\nConnector type Description Source connector Back up VMs and containers from a Proxmox node into a Kloset store. Destination connector Restore snapshots from a Kloset store back to a Proxmox node. Requirements\nProxmox VE with vzdump available on the node. SSH access to the Proxmox node with appropriate permissions (remote mode). Plakar v1.1.0-beta or later. Typical use cases\nEncrypted, deduplicated backups of Proxmox VMs and containers. Cross-cluster VM migration and restore. Long-term archiving to object storage (S3, Scaleway, OVH, Exoscale). Centralized backup of multiple hypervisors from a single Plakar instance. Installation # The Proxmox integration is distributed as a Plakar package.\nPre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Proxmox package:\n$ plakar pkg add proxmox Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build proxmox A package archive will be created in the current directory (e.g., proxmox_v1.1.0-rc.1_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./proxmox_v1.1.0-rc.1_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nOperating modes # The Proxmox integration supports two operating modes.\nLocal mode — Plakar runs directly on the Proxmox node alongside vzdump:\nProxmox node ├ vzdump └ plakar Remote mode — Plakar runs on a separate machine and connects to the Proxmox node over SSH:\nBackup server │ │ SSH ▼ Proxmox node └ vzdump Remote mode allows a single Plakar instance to back up multiple hypervisors. 
The operating mode is set via the mode option when configuring a source or destination.\nSource connector # The source connector invokes vzdump on the Proxmox node, collects the resulting archive, and ingests it into a Kloset store with encryption and deduplication.\nflowchart LR subgraph Source[\"Proxmox Node\"] Vzdump[\"vzdump\"] end Plakar[\"Plakar\"] Via[\"Retrieve archive viaSSH\"] Transform[\"Encrypt \u0026 deduplicate\"] Store[\"Kloset Store\"] Vzdump --\u003e Via --\u003e Plakar --\u003e Transform --\u003e Store Configure # Register a Proxmox source:\n$ plakar source add myProxmox proxmox+backup://10.0.0.10 \\ mode=remote \\ conn_username=root \\ conn_identity_file=/path/to/key \\ conn_method=identity Back up workloads # Back up a single virtual machine by ID:\n$ plakar backup -o vmid=101 @myProxmox Back up all machines in a pool:\n$ plakar backup -o pool=prod @myProxmox Back up the entire hypervisor:\n$ plakar backup -o all @myProxmox Options # Option Required Description location Yes Proxmox node address. Format: proxmox+backup://\u0026lt;host\u0026gt; mode Yes Operating mode. local or remote. conn_username Yes (remote) SSH username on the Proxmox node. conn_identity_file No Path to the SSH private key. Required when conn_method=identity. conn_method No Authentication method. identity for key-based auth. 
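Since remote mode lets one Plakar instance back up several hypervisors, it can help to script the per-node source registration. The sketch below is a hedged example: the node list, the source_name naming scheme, and the print_registrations helper are assumptions introduced here; it only prints plakar source add commands (matching the Configure example above) so they can be reviewed before being run.

```shell
# Sketch: one source per Proxmox node, driven from a list of addresses.
# source_name and print_registrations are illustrative assumptions; the
# printed commands mirror the Configure example on this page.

# source_name <host> -> a source name derived from the node address
source_name() {
    echo "proxmox_$(echo "$1" | tr '.-' '__')"
}

# Print (rather than run) one "plakar source add" command per node,
# so the registrations can be reviewed first.
print_registrations() {
    for host in "$@"; do
        echo "plakar source add $(source_name "$host")" \
             "proxmox+backup://$host" \
             "mode=remote conn_username=root" \
             "conn_identity_file=/path/to/key conn_method=identity"
    done
}

# print_registrations 10.0.0.10 10.0.0.11
```

Piping the output through a review step (or into sh once verified) registers every node; backups can then target each source with plakar backup @<name>.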
Destination connector # The destination connector uploads a vzdump archive from a Plakar snapshot to a Proxmox node and restores it using native Proxmox tools: qmrestore for virtual machines and pct restore for containers.\nflowchart LR Store[\"Kloset Store\"] Plakar[\"Plakar\"] Transform[\"Decrypt \u0026 reconstruct\"] Via[\"Push archive viaSSH\"] subgraph Destination[\"Proxmox Node\"] Restore[\"qmrestore / pct restore\"] end Store --\u003e Plakar --\u003e Transform --\u003e Via --\u003e Restore Configure # Register a Proxmox destination:\n$ plakar destination add myProxmox \\ proxmox+backup://10.0.0.10 \\ mode=remote \\ conn_username=root \\ conn_identity_file=/path/to/key \\ conn_method=identity Restore workloads # Restore all machines in a snapshot:\n$ plakar restore -to @myProxmox \u0026lt;snapshot_id\u0026gt; Restore a single VM from a snapshot containing multiple machines:\n$ plakar restore -to @myProxmox \u0026lt;snapshot_id\u0026gt;:/backup/qemu/101_myvm Options # Option Required Description location Yes Proxmox node address. Format: proxmox+backup://\u0026lt;host\u0026gt; mode Yes Operating mode. local or remote. conn_username Yes (remote) SSH username on the Proxmox node. conn_identity_file No Path to the SSH private key. Required when conn_method=identity. conn_method No Authentication method. identity for key-based auth. Limitations and scope # What is captured during backup\nVirtual machine and container disk images, as produced by vzdump. Proxmox configuration associated with each backed-up workload. Snapshot consistency\nPlakar relies on vzdump for snapshot consistency. For live machines, vzdump uses QEMU guest agent or suspend-resume to produce a consistent backup. 
Refer to the Proxmox vzdump documentation for details on consistency modes.\nSee also # Proxmox VE Backup and Restore Backing up Proxmox with Plakar: a third-party integration built in a few days ","date":"30 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/integrations/proxmox/","section":"Docs","summary":"Back up and restore Proxmox virtual machines and containers with Plakar.","title":"Proxmox","type":"docs"},{"content":" PostgreSQL # There are several ways to back up PostgreSQL databases, via dumps, file system-level backups, or continuous archiving. Each method has its own advantages and use cases.\nAs a starting point, we strongly encourage you to read the official PostgreSQL documentation on Backup and Restore.\nThen, follow the appropriate guide below for more specific instructions to manage PostgreSQL backups with Plakar.\nLogical backups with SQL dumps Back up PostgreSQL databases using pg_dump and restore from these backups.\nPhysical backups with pg_basebackup How to perform physical backups of a PostgreSQL cluster using pg_basebackup, and store them with Plakar.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/postgres/","section":"Docs","summary":"Guides on backing up and restoring PostgreSQL databases","title":"PostgreSQL","type":"docs"},{"content":" MySQL # MySQL offers flexible backup strategies to suit different requirements. 
The most common approach uses mysqldump to create logical backups - SQL dumps that can be easily restored across different MySQL versions and platforms.\nFor a comprehensive understanding of MySQL backup strategies, we recommend reading the official MySQL documentation on Backup and Recovery.\nLogical backups with SQL dumps Back up MySQL and MariaDB databases using the Plakar MySQL integration and restore them.\nPhysical backups Perform physical backups of MySQL databases using file copy or Percona XtraBackup with Plakar.\n","date":"18 March 2026","externalUrl":null,"permalink":"/docs/main/guides/mysql/","section":"Docs","summary":"Guides on backing up and restoring MySQL database","title":"MySQL","type":"docs"},{"content":" MySQL # MySQL offers flexible backup strategies to suit different requirements. The most common approach uses mysqldump to create logical backups - SQL dumps that can be easily restored across different MySQL versions and platforms.\nFor a comprehensive understanding of MySQL backup strategies, we recommend reading the official MySQL documentation on Backup and Recovery.\nLogical backups with SQL dumps Back up MySQL and MariaDB databases using the Plakar MySQL integration and restore them.\nPhysical backups Perform physical backups of MySQL databases using file copy or Percona XtraBackup with Plakar.\n","date":"18 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/mysql/","section":"Docs","summary":"Guides on backing up and restoring MySQL database","title":"MySQL","type":"docs"},{"content":" Kubernetes # The Kubernetes integration backs up cluster resources and persistent volumes. It provides two connectors accessible via two URI schemes:\nURI scheme What it backs up k8s:// Kubernetes manifests and resource state across namespaces. k8s+csi:// Persistent volume contents via CSI driver snapshots. Requirements\nPlakar v1.1.0-beta or later. kubectl proxy running and accessible. 
A CSI driver with snapshot support and a configured VolumeSnapshotClass for CSI-based PVC backups. Typical use cases\nNamespace or resource-level restore from manifest snapshots. Incident investigation by browsing cluster state at a point in time. Persistent volume backup and cross-environment data portability. Installation # Pre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. See Logging in to Plakar for details.\nInstall the Kubernetes package:\n$ plakar pkg add k8s Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build k8s A package archive will be created in the current directory (e.g., k8s_v1.1.0-rc.1_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./k8s_v1.1.0-rc.1_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nManifest backup and restore # The k8s:// connector fetches all Kubernetes resources across the cluster and stores them as a Plakar snapshot. 
This enables browsing, diffing, and restoring cluster configuration at any level of granularity — full cluster, single namespace, or individual resource.\nSnapshots include resource status metadata, making them useful for incident investigation — you can browse the Plakar UI to inspect the state of deployments, nodes, and other resources at any point in time.\nflowchart LR subgraph Source[\"Kubernetes Cluster\"] API[\"API Server\"] end Plakar[\"Plakar\"] Via[\"Fetch manifests viakubectl proxy\"] Transform[\"Encrypt \u0026 deduplicate\"] Store[\"Kloset Store\"] API --\u003e Via --\u003e Plakar --\u003e Transform --\u003e Store Start a proxy to the cluster:\n$ kubectl proxy Starting to serve on 127.0.0.1:8001 Back up manifests # Back up all resources across the entire cluster:\n$ plakar backup k8s://localhost:8001 Back up resources in a specific namespace:\n$ plakar backup k8s://localhost:8001/foo Restore manifests # Restore all StatefulSet resources in the foo namespace:\n$ plakar restore -to k8s://localhost:8001 abcd:/foo/apps/StatefulSet Persistent volume backup and restore (CSI) # The k8s+csi:// connector backs up the contents of persistent volumes by creating a VolumeSnapshot, mounting it in a temporary pod running a helper importer, and ingesting the data into a Kloset store. The snapshot is deleted from the cluster once ingestion completes.\nRestore works in reverse: data is written into a target PVC using the same helper pod mechanism. 
The target can be an existing PVC or a freshly created one.\nflowchart LR subgraph Source[\"Kubernetes Cluster\"] PVC[\"PVC\"] Snap[\"VolumeSnapshot\"] PVC --\u003e Snap end Plakar[\"Plakar\"] Via[\"Ingest via helper pod\"] Transform[\"Encrypt \u0026 deduplicate\"] Store[\"Kloset Store\"] Snap --\u003e Via --\u003e Plakar --\u003e Transform --\u003e Store Back up a PVC # $ plakar backup -o volume_snapshot_class=my-snapclass k8s+csi://localhost:8001/storage/my-pvc Restore a PVC # Restore into a new, empty PVC:\n$ kubectl create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pristine namespace: storage spec: resources: requests: storage: 1Gi accessModes: - ReadWriteOnce $ plakar restore -to k8s+csi://localhost:8001/storage/pristine abcdef: Restore into an existing PVC by referencing it in the same way. The target PVC must have sufficient capacity.\nOptions # Option Required Description volume_snapshot_class Yes Name of the VolumeSnapshotClass to use for CSI snapshots. kubelet_image No Container image for the helper pod. Defaults to a recent kubelet image. Limitations and scope # What is captured\nAll Kubernetes resource manifests and status metadata (k8s://). Persistent volume contents for CSI-backed PVCs (k8s+csi://). What is not captured\nNon-CSI volumes are not yet supported for PVC backups. Node-level configuration (OS, kubelet config, network setup). In-flight workload state (open connections, in-memory data). Snapshot consistency\nManifest snapshots reflect the state of the API server at the time of backup. 
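For CSI-based PVC backups, the volume_snapshot_class option must name a VolumeSnapshotClass that already exists in the cluster. A minimal manifest sketch follows; the driver value is a placeholder, so substitute the driver name reported by your CSI installation:

```yaml
# Sketch of a minimal VolumeSnapshotClass; the driver below is a placeholder
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: my-snapclass
driver: hostpath.csi.k8s.io   # replace with your cluster's CSI driver name
deletionPolicy: Delete        # snapshots are removed when their VolumeSnapshot is deleted
```

With this class applied, the name my-snapclass is what you pass as -o volume_snapshot_class=my-snapclass in the backup example above.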
For PVCs, consistency depends on the CSI driver and whether the workload was quiesced before the snapshot was taken.\nSee also # Kubernetes integration demo etcd integration Kubernetes documentation — VolumeSnapshots ","date":"2 April 2026","externalUrl":null,"permalink":"/docs/main/integrations/kubernetes/","section":"Docs","summary":"Back up and restore Kubernetes resources and persistent volumes with Plakar.","title":"Kubernetes","type":"docs"},{"content":" Kubernetes # The Kubernetes integration backs up cluster resources and persistent volumes. It provides two connectors accessible via two URI schemes:\nURI scheme What it backs up k8s:// Kubernetes manifests and resource state across namespaces. k8s+csi:// Persistent volume contents via CSI driver snapshots. Requirements\nPlakar v1.1.0-beta or later. kubectl proxy running and accessible. A CSI driver with snapshot support and a configured VolumeSnapshotClass for CSI-based PVC backups. Typical use cases\nNamespace or resource-level restore from manifest snapshots. Incident investigation by browsing cluster state at a point in time. Persistent volume backup and cross-environment data portability. Installation # Pre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the Kubernetes package:\n$ plakar pkg add k8s Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build k8s A package archive will be created in the current directory (e.g., k8s_v1.1.0-rc.1_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./k8s_v1.1.0-rc.1_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nManifest backup and restore # The k8s:// connector fetches all Kubernetes resources across the cluster and stores them as a Plakar snapshot. This enables browsing, diffing, and restoring cluster configuration at any level of granularity — full cluster, single namespace, or individual resource.\nSnapshots include resource status metadata, making them useful for incident investigation — you can browse the Plakar UI to inspect the state of deployments, nodes, and other resources at any point in time.\nflowchart LR subgraph Source[\"Kubernetes Cluster\"] API[\"API Server\"] end Plakar[\"Plakar\"] Via[\"Fetch manifests via kubectl proxy\"] Transform[\"Encrypt \u0026 deduplicate\"] Store[\"Kloset Store\"] API --\u003e Via --\u003e Plakar --\u003e Transform --\u003e Store Start a proxy to the cluster:\n$ kubectl proxy Starting to serve on 127.0.0.1:8001 Back up manifests # Back up all resources across the entire cluster:\n$ plakar backup k8s://localhost:8001 Back up resources in a specific namespace:\n$ plakar backup k8s://localhost:8001/foo Restore manifests # Restore all StatefulSet resources in the foo namespace:\n$ plakar restore -to k8s://localhost:8001 abcd:/foo/apps/StatefulSet Persistent volume backup and restore (CSI) # The k8s+csi:// connector backs up the contents of persistent volumes by creating a VolumeSnapshot, mounting it in a temporary pod 
running a helper importer, and ingesting the data into a Kloset store. The snapshot is deleted from the cluster once ingestion completes.\nRestore works in reverse: data is written into a target PVC using the same helper pod mechanism. The target can be an existing PVC or a freshly created one.\nflowchart LR subgraph Source[\"Kubernetes Cluster\"] PVC[\"PVC\"] Snap[\"VolumeSnapshot\"] PVC --\u003e Snap end Plakar[\"Plakar\"] Via[\"Ingest via helper pod\"] Transform[\"Encrypt \u0026 deduplicate\"] Store[\"Kloset Store\"] Snap --\u003e Via --\u003e Plakar --\u003e Transform --\u003e Store Back up a PVC # $ plakar backup -o volume_snapshot_class=my-snapclass k8s+csi://localhost:8001/storage/my-pvc Restore a PVC # Restore into a new, empty PVC:\n$ kubectl create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pristine namespace: storage spec: resources: requests: storage: 1Gi accessModes: - ReadWriteOnce $ plakar restore -to k8s+csi://localhost:8001/storage/pristine abcdef: Restore into an existing PVC by referencing it in the same way. The target PVC must have sufficient capacity.\nOptions # Option Required Description volume_snapshot_class Yes Name of the VolumeSnapshotClass to use for CSI snapshots. kubelet_image No Container image for the helper pod. Defaults to a recent kubelet image. Limitations and scope # What is captured\nAll Kubernetes resource manifests and status metadata (k8s://). Persistent volume contents for CSI-backed PVCs (k8s+csi://). What is not captured\nNon-CSI volumes are not yet supported for PVC backups. Node-level configuration (OS, kubelet config, network setup). In-flight workload state (open connections, in-memory data). Snapshot consistency\nManifest snapshots reflect the state of the API server at the time of backup. 
For PVCs, consistency depends on the CSI driver and whether the workload was quiesced before the snapshot was taken.\nSee also # Kubernetes integration demo etcd integration Kubernetes documentation — VolumeSnapshots ","date":"2 April 2026","externalUrl":null,"permalink":"/docs/v1.1.0/integrations/kubernetes/","section":"Docs","summary":"Back up and restore Kubernetes resources and persistent volumes with Plakar.","title":"Kubernetes","type":"docs"},{"content":" OVHcloud # Guides on running backups in OVHcloud\nUsing OVHcloud VPS as a Dedicated Backup Server Automate backups of OVHcloud servers to Object Storage using a dedicated VPS.\nBacking Up an OVHcloud Managed PostgreSQL Database Backing up an OVHcloud Managed PostgreSQL database to Object Storage using pg_dump and Plakar.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/ovhcloud/","section":"Docs","summary":"Guides on running backups in OVHcloud","title":"OVHcloud","type":"docs"},{"content":" PostgreSQL # There are several ways to back up PostgreSQL databases, via dumps, file system-level backups, or continuous archiving. Each method has its own advantages and use cases.\nAs a starting point, we strongly encourage you to read the official PostgreSQL documentation on Backup and Restore.\nThen, follow the appropriate guide below for more specific instructions to manage PostgreSQL backups with Plakar.\nLogical backups with pg_dump Back up PostgreSQL databases using the Plakar PostgreSQL integration and restore them.\nPhysical backups with pg_basebackup Back up a PostgreSQL cluster using the Plakar PostgreSQL integration and restore it.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/guides/postgres/","section":"Docs","summary":"Guides on backing up and restoring PostgreSQL databases","title":"PostgreSQL","type":"docs"},{"content":" PostgreSQL # There are several ways to back up PostgreSQL databases, via dumps, file system-level backups, or continuous archiving. 
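As an illustration of the dump-based approach, a scheduled job can run pg_dump and then snapshot the dump directory with Plakar. The crontab sketch below is illustrative only; the paths, database name, and schedule are assumptions:

```
# Hypothetical nightly job: dump at 02:00, then back up the dump directory
# into a Kloset store at /var/backups/kloset
0 2 * * * pg_dump -Fc mydb > /var/backups/pg/mydb.dump && plakar at /var/backups/kloset backup /var/backups/pg
```

The dedicated pg_dump guide below covers the supported integration in detail.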
Each method has its own advantages and use cases.\nAs a starting point, we strongly encourage you to read the official PostgreSQL documentation on Backup and Restore.\nThen, follow the appropriate guide below for more specific instructions to manage PostgreSQL backups with Plakar.\nLogical backups with pg_dump Back up PostgreSQL databases using the Plakar PostgreSQL integration and restore them.\nPhysical backups with pg_basebackup Back up a PostgreSQL cluster using the Plakar PostgreSQL integration and restore it.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/postgres/","section":"Docs","summary":"Guides on backing up and restoring PostgreSQL databases","title":"PostgreSQL","type":"docs"},{"content":" etcd # The etcd integration takes snapshots of an etcd cluster and stores them in a Kloset store with encryption and deduplication.\nConnector type Description Source connector Back up an etcd cluster into a Kloset store. Requirements\nPlakar v1.1.0-beta or later. Network access to at least one etcd node. Typical use cases\nDisaster recovery for etcd clusters. Point-in-time snapshots of Kubernetes cluster state (for clusters using etcd as their backing store). Installation # Pre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the etcd package:\n$ plakar pkg add etcd Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build etcd A package archive will be created in the current directory (e.g., etcd_v1.1.0-rc.1_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./etcd_v1.1.0-rc.1_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nSource connector # The source connector connects to an etcd node, takes a snapshot of the cluster, and ingests it into a Kloset store.\nflowchart LR subgraph Source[\"etcd Cluster\"] DB[\"Cluster state\"] end Plakar[\"Plakar\"] Via[\"Retrieve snapshot via etcd API\"] Transform[\"Encrypt \u0026 deduplicate\"] Store[\"Kloset Store\"] DB --\u003e Via --\u003e Plakar --\u003e Transform --\u003e Store Back up a cluster # Back up by connecting to a node over HTTP:\n$ plakar backup etcd://node1:2379 Back up using HTTPS with authentication:\n$ plakar backup -o username=myuser -o password=secret etcd+https://node1:2379 Back up by specifying multiple nodes:\n$ plakar backup -o endpoints=http://node1:2379,http://node2:2379 etcd:// Options # Option Required Description location Yes etcd node address. Use etcd://, etcd+http://, or etcd+https:// followed by the hostname and optional port. endpoints No Comma-separated list of node endpoints. Takes priority over location when set. username No etcd username. password No etcd password. Restoring # The etcd API does not support restoring a snapshot directly. 
To restore, first retrieve the snapshot from the Kloset store to disk, then use etcdutl to provision a new etcd data directory from it.\n$ plakar at /var/backups restore -to ./etcd-snapshot \u0026lt;snapshot_id\u0026gt; Then follow the upstream etcd recovery procedure to bring the cluster back up.\nRefer to the etcd recovery documentation for full restore instructions.\nSee also # etcd documentation Kubernetes integration ","date":"2 April 2026","externalUrl":null,"permalink":"/docs/main/integrations/etcd/","section":"Docs","summary":"Back up etcd clusters with Plakar.","title":"etcd","type":"docs"},{"content":" etcd # The etcd integration takes snapshots of an etcd cluster and stores them in a Kloset store with encryption and deduplication.\nConnector type Description Source connector Back up an etcd cluster into a Kloset store. Requirements\nPlakar v1.1.0-beta or later. Network access to at least one etcd node. Typical use cases\nDisaster recovery for etcd clusters. Point-in-time snapshots of Kubernetes cluster state (for clusters using etcd as their backing store). Installation # Pre-built package Building from source Pre-compiled packages are available for common platforms and provide the simplest installation method.\nLogging In Pre-built packages require Plakar authentication. 
See Logging in to Plakar for details.\nInstall the etcd package:\n$ plakar pkg add etcd Verify installation:\n$ plakar pkg list Source builds are useful when pre-built packages are unavailable or when customization is required.\nPrerequisites:\nGo toolchain compatible with your Plakar version Build the package:\n$ plakar pkg build etcd A package archive will be created in the current directory (e.g., etcd_v1.1.0-rc.1_darwin_arm64.ptar).\nInstall the package:\n$ plakar pkg add ./etcd_v1.1.0-rc.1_darwin_arm64.ptar Verify installation:\n$ plakar pkg list To list, upgrade, or remove the package, see the managing packages guide.\nSource connector # The source connector connects to an etcd node, takes a snapshot of the cluster, and ingests it into a Kloset store.\nflowchart LR subgraph Source[\"etcd Cluster\"] DB[\"Cluster state\"] end Plakar[\"Plakar\"] Via[\"Retrieve snapshot via etcd API\"] Transform[\"Encrypt \u0026 deduplicate\"] Store[\"Kloset Store\"] DB --\u003e Via --\u003e Plakar --\u003e Transform --\u003e Store Back up a cluster # Back up by connecting to a node over HTTP:\n$ plakar backup etcd://node1:2379 Back up using HTTPS with authentication:\n$ plakar backup -o username=myuser -o password=secret etcd+https://node1:2379 Back up by specifying multiple nodes:\n$ plakar backup -o endpoints=http://node1:2379,http://node2:2379 etcd:// Options # Option Required Description location Yes etcd node address. Use etcd://, etcd+http://, or etcd+https:// followed by the hostname and optional port. endpoints No Comma-separated list of node endpoints. Takes priority over location when set. username No etcd username. password No etcd password. Restoring # The etcd API does not support restoring a snapshot directly. 
To restore, first retrieve the snapshot from the Kloset store to disk, then use etcdutl to provision a new etcd data directory from it.\n$ plakar at /var/backups restore -to ./etcd-snapshot \u0026lt;snapshot_id\u0026gt; Then follow the upstream etcd recovery procedure to bring the cluster back up.\nRefer to the etcd recovery documentation for full restore instructions.\nSee also # etcd documentation Kubernetes integration ","date":"2 April 2026","externalUrl":null,"permalink":"/docs/v1.1.0/integrations/etcd/","section":"Docs","summary":"Back up etcd clusters with Plakar.","title":"etcd","type":"docs"},{"content":" Exoscale # Guides on running backups in Exoscale\nUsing Exoscale Compute as a Dedicated Backup Server Back up Exoscale compute servers to Exoscale Object Storage using a dedicated compute instance.\nBack Up an Exoscale Managed MySQL Database Back up an Exoscale Managed MySQL database to Exoscale Object Storage using mysqldump and Plakar.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/guides/exoscale/","section":"Docs","summary":"Guides on running backups in Exoscale","title":"Exoscale","type":"docs"},{"content":" OVHcloud # Guides on running backups in OVHcloud\nUsing OVHcloud VPS as a Dedicated Backup Server Automate backups of OVHcloud servers to Object Storage using a dedicated VPS.\nBacking Up an OVHcloud Managed PostgreSQL Database Backing up an OVHcloud Managed PostgreSQL database to Object Storage using pg_dump and Plakar.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/guides/ovhcloud/","section":"Docs","summary":"Guides on running backups in OVHcloud","title":"OVHcloud","type":"docs"},{"content":" OVHcloud # Guides on running backups in OVHcloud\nUsing OVHcloud VPS as a Dedicated Backup Server Automate backups of OVHcloud servers to Object Storage using a dedicated VPS.\nBacking Up an OVHcloud Managed PostgreSQL Database Backing up an OVHcloud Managed PostgreSQL database to Object Storage using pg_dump 
and Plakar.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/ovhcloud/","section":"Docs","summary":"Guides on running backups in OVHcloud","title":"OVHcloud","type":"docs"},{"content":" Exoscale # Guides on running backups in Exoscale\nUsing Exoscale Compute as a Dedicated Backup Server Back up Exoscale compute servers to Exoscale Object Storage using a dedicated compute instance.\nBack Up an Exoscale Managed MySQL Database Back up an Exoscale Managed MySQL database to Exoscale Object Storage using mysqldump and Plakar.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/main/guides/exoscale/","section":"Docs","summary":"Guides on running backups in Exoscale","title":"Exoscale","type":"docs"},{"content":" Exoscale # Guides on running backups in Exoscale\nUsing Exoscale Compute as a Dedicated Backup Server Back up Exoscale compute servers to Exoscale Object Storage using a dedicated compute instance.\nBack Up an Exoscale Managed MySQL Database Back up an Exoscale Managed MySQL database to Exoscale Object Storage using mysqldump and Plakar.\n","date":"19 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/guides/exoscale/","section":"Docs","summary":"Guides on running backups in Exoscale","title":"Exoscale","type":"docs"},{"content":"PLAKAR-ARCHIVE(1) General Commands Manual PLAKAR-ARCHIVE(1) NAME plakar-archive \u0026#x2014; Create an archive from a Plakar snapshot\nSYNOPSIS plakar archive [-format type] [-output archive] [-rebase] snapshotID:path DESCRIPTION The plakar archive command creates an archive of the given type from the contents at path of a specified Plakar snapshot, or all the files if no path is given.\nThe options are as follows:\n-format type Specify the archive format. Supported formats are: tar Creates a tar file. tarball Creates a compressed tar.gz file. zip Creates a zip archive. -output pathname Specify the output path for the archive file. 
If omitted, the archive is created with a default name based on the current date and time. -rebase Strip the leading path from archived files, useful for creating \u0026quot;flat\u0026quot; archives without nested directories. EXIT STATUS The plakar-archive utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Create a tarball of the entire snapshot:\n$ plakar archive -output backup.tar.gz -format tarball abc123 Create a zip archive of a specific directory within a snapshot:\n$ plakar archive -output dir.zip -format zip abc123:/var/www Archive with rebasing to remove directory structure:\n$ plakar archive -rebase -format tar abc123 SEE ALSO plakar(1), plakar-backup(1)\nPlakar May 5, 2026 PLAKAR-ARCHIVE(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-archive/","section":"Docs","summary":"Create an archive from a Plakar snapshot","title":"archive","type":"docs"},{"content":"PLAKAR-BACKUP(1) General Commands Manual PLAKAR-BACKUP(1) NAME plakar-backup \u0026#x2014; Create a new snapshot in a Kloset store\nSYNOPSIS plakar backup [-cache path] [-category category] [-check] [-dry-run] [-environment environment] [-force-timestamp timestamp] [-ignore pattern] [-ignore-file file] [-job job] [-name name] [-no-progress] [-no-xattr] [-o option=value] [-packfiles path] [-perimeter perimeter] [-tag tag] [place] DESCRIPTION The plakar backup command creates a new snapshot of place, or the current directory. Snapshots can be filtered to ignore specific files or directories based on patterns provided through options.\nplace can be either a path, an URI, or a label with the form \u0026#x201C;@name\u0026#x201D; to reference a source connector configured with plakar-source(1).\nThe options are as follows:\n-cache path Specify a path to store the vfs cache. Use the special value \u0026#x2018;no\u0026#x2019; to disable caching. 
Use the special value \u0026#x2018;vfs\u0026#x2019; to use the in-memory vfs cache (the default). -category category Set the snapshot category. -check Perform a full check on the backup after success. -dry-run Do not write a snapshot; instead, perform a dry run by outputting the list of files and directories that would be included in the backup. Respects all exclude patterns and other options, but makes no changes to the Kloset store. -environment environment Set the snapshot environment. -force-timestamp timestamp Specify a fixed timestamp (in ISO 8601 or relative human format) to use for the snapshot. Can be used to reimport an existing backup with the same timestamp. -ignore pattern Specify individual gitignore exclusion patterns to ignore files or directories in the backup. This option can be repeated. -ignore-file file Specify a file containing gitignore exclusion patterns, one per line, to ignore files or directories in the backup. -job job Name the snapshot job. -name name Name the snapshot. -no-progress Do not compute or display progress. By default, plakar backup does two passes on the source of the backup: one to compute the number of items, and a second for processing the items themselves. This flag disables the pass to compute the number of items. It is set implicitly for some importer connectors that don't support the two-pass approach. -no-xattr Skip extended attributes (xattrs) when creating the backup. -o option=value Can be used to pass extra arguments to the source connector. The given option takes precedence over the configuration file. -packfiles path Path where to put the temporary packfiles instead of building them in the default temporary directory. If the special value \u0026#x2018;memory\u0026#x2019; is specified then the packfiles are built in memory. -perimeter perimeter Set the snapshot perimeter. -tag tag Comma-separated list of tags to apply to the snapshot. 
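The -ignore and -ignore-file options accept gitignore-style patterns. A small example ignore file (contents purely illustrative) might contain:

```
# Exclude temporary and build artifacts from the snapshot
*.tmp
*.log
node_modules/
.cache/
```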
ENVIRONMENT PLAKAR_TAGS Comma-separated list of tags to apply to the snapshot during backup. Overridden by the -tag command-line flag. EXIT STATUS The plakar-backup utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Create a snapshot of the current directory with two tags:\n$ plakar backup -tag daily-backup,production Ignore files using patterns in a given file:\n$ plakar backup -ignore-file ~/my-ignore-file /var/www or by using patterns specified inline:\n$ plakar backup -ignore \u0026quot;*.tmp\u0026quot; -ignore \u0026quot;*.log\u0026quot; /var/www Pass an option to the importer, in this case to avoid traversing mount points:\n$ plakar backup -o dont_traverse_fs=true / SEE ALSO plakar(1), plakar-source(1)\nPlakar May 5, 2026 PLAKAR-BACKUP(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-backup/","section":"Docs","summary":"Create a new snapshot in a Kloset store","title":"backup","type":"docs"},{"content":"PLAKAR-CAT(1) General Commands Manual PLAKAR-CAT(1) NAME plakar-cat \u0026#x2014; Display file contents from a Plakar snapshot\nSYNOPSIS plakar cat [-decompress] [-highlight] snapshotID:path ... DESCRIPTION The plakar cat command outputs the contents of path within Plakar snapshots to the standard output. It can decompress compressed files and optionally apply syntax highlighting based on the file type.\nThe options are as follows:\n-decompress If set, Plakar attempts to decompress application/gzip files. -highlight Apply syntax highlighting to the output based on the file type. 
EXIT STATUS The plakar-cat utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Display a file's contents from a snapshot:\n$ plakar cat abc123:/etc/passwd Display a file with syntax highlighting:\n$ plakar cat -highlight abc123:/home/op/korpus/driver.sh SEE ALSO plakar(1), plakar-backup(1)\nPlakar May 5, 2026 PLAKAR-CAT(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-cat/","section":"Docs","summary":"Display file contents from a Plakar snapshot","title":"cat","type":"docs"},{"content":"PLAKAR-CHECK(1) General Commands Manual PLAKAR-CHECK(1) NAME plakar-check \u0026#x2014; Check data integrity in a Plakar repository\nSYNOPSIS plakar check [-fast] [-no-verify] [snapshotID:path ...] DESCRIPTION The plakar check command verifies the integrity of data in a Plakar repository. It checks the given paths inside the snapshots for consistency and validates file macs to ensure no corruption has occurred, or all the data in the repository if no snapshotID or location flags are given.\nIn addition to the flags described below, plakar check supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-fast Enable a faster check that skips mac verification. This option performs only structural validation without confirming data integrity. -no-verify Disable signature verification. This option allows you to proceed with checking snapshot integrity regardless of an invalid snapshot signature. 
EXIT STATUS The plakar-check utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Perform a full integrity check on all snapshots:\n$ plakar check Perform a fast check on specific paths of two snapshots:\n$ plakar check -fast abc123:/etc/passwd def456:/var/www SEE ALSO plakar(1), plakar-query(7)\nPlakar May 5, 2025 PLAKAR-CHECK(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-check/","section":"Docs","summary":"Check data integrity in a Plakar repository","title":"check","type":"docs"},{"content":"PLAKAR-CREATE(1) General Commands Manual PLAKAR-CREATE(1) NAME plakar-create \u0026#x2014; Create a new Plakar repository\nSYNOPSIS plakar create [-plaintext] DESCRIPTION The plakar create command creates a new Plakar repository at the specified path, which defaults to ~/.plakar.\nThe options are as follows:\n-plaintext Disable transparent encryption for the repository. If specified, the repository will not use encryption. ENVIRONMENT PLAKAR_PASSPHRASE Repository encryption password. EXIT STATUS The plakar-create utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1), plakar-backup(1)\nPlakar May 5, 2026 PLAKAR-CREATE(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-create/","section":"Docs","summary":"Create a new Plakar repository","title":"create","type":"docs"},{"content":"PLAKAR-DESTINATION(1) General Commands Manual PLAKAR-DESTINATION(1) NAME plakar-destination \u0026#x2014; Manage Plakar restore destination configuration\nSYNOPSIS plakar destination subcommand ... 
DESCRIPTION The plakar destination command manages the configuration of destinations where Plakar will restore.\nThe configuration consists of a set of named entries, each of them describing a destination where a restore operation may happen.\nA destination is defined by at least a location, specifying the exporter to use, and some exporter-specific parameters.\nThe subcommands are as follows:\nadd name location [option=value ...] Create a new destination entry identified by name with the specified location describing the exporter to use. Additional exporter options can be set by adding option=value parameters. check name Check whether the exporter for the destination identified by name is properly configured. import [-config location] [-overwrite] [-rclone] [sections ...] Import destination configurations from various sources including files, piped input, or rclone configurations. By default, reads from stdin, allowing for piped input from other commands like plakar source show.\nThe -config option specifies a file or URL to read the configuration from.\nThe -overwrite option allows overwriting existing destination configurations with the same names.\nThe -rclone option treats the input as an rclone configuration, useful for importing rclone remotes as Plakar destinations.\nSpecific sections can be imported by listing their names.\nSections can be renamed during import by appending :newname.\nFor detailed examples and usage patterns, see the https://plakar.io/docs/v1.1.0/guides/importing-configurations/ Importing Configurations guide.\nping name Try to open the destination identified by name to make sure it is reachable. rm name Remove the destination identified by name from the configuration. set name [option=value ...] Set the option to value for the destination identified by name. Multiple option/value pairs can be specified. show [-secrets] [name ...] Display the current destinations configuration. 
If -secrets is specified, sensitive information such as passwords or tokens will be shown. unset name [option ...] Remove the option for the destination entry identified by name. EXIT STATUS The plakar-destination utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nPlakar September 11, 2025 PLAKAR-DESTINATION(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-destination/","section":"Docs","summary":"Manage Plakar restore destination configuration","title":"destination","type":"docs"},{"content":"PLAKAR-DIAG(1) General Commands Manual PLAKAR-DIAG(1) NAME plakar-diag \u0026#x2014; Display detailed information about Plakar internal structures\nSYNOPSIS plakar diag [contenttype | locks | object | packfile | snapshot | state | vfs | xattr] DESCRIPTION The plakar diag command provides detailed information about various internal data structures. The type of information displayed depends on the specified argument. Without any arguments, display information about the repository.\nThe sub-commands are as follows:\ncontenttype snapshotID:path \u0026#x00A0; locks Display the list of locks currently held on the repository. object objectID Display information about a specific object, including its mac, type, tags, and associated data chunks. packfile packfileID Show details of packfiles, including entries and macs, which store object data within the repository. snapshot snapshotID Show detailed information about a specific snapshot, including its metadata, directory and file count, and size. state List or describe the states in the repository. vfs snapshotID:path Show filesystem (VFS) details for a specific path within a snapshot, listing directory or file attributes, including permissions, ownership, and custom metadata. 
xattr snapshotID:path \u0026#x00A0; EXIT STATUS The plakar-diag utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Show repository information:\n$ plakar diag Show detailed information for a snapshot:\n$ plakar diag snapshot abc123 List all states in the repository:\n$ plakar diag state Display a specific object within a snapshot:\n$ plakar diag object 1234567890abcdef Display filesystem details for a path within a snapshot:\n$ plakar diag vfs abc123:/etc/passwd SEE ALSO plakar(1), plakar-backup(1)\nPlakar May 5, 2026 PLAKAR-DIAG(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-diag/","section":"Docs","summary":"Display detailed information about Plakar internal structures","title":"diag","type":"docs"},{"content":"PLAKAR-DIFF(1) General Commands Manual PLAKAR-DIFF(1) NAME plakar-diff \u0026#x2014; Show differences between files in Plakar snapshots\nSYNOPSIS plakar diff [-highlight] [-recursive] snapshotID1[:path1] snapshotID2[:path2] DESCRIPTION The plakar diff command compares two Plakar snapshots, optionally restricting to specific files within them. If only snapshot IDs are provided, it compares the root directories of each snapshot. If file paths are specified, the command compares the individual files. The diff output is shown in unified diff format, with an option to highlight differences.\nThe options are as follows:\n-highlight Apply syntax highlighting to the diff output for readability. -recursive When comparing directories, recursively compare all subdirectories. 
EXIT STATUS The plakar-diff utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Compare root directories of two snapshots:\n$ plakar diff abc123 def456 Compare /etc/passwd across snapshots with highlighting:\n$ plakar diff -highlight abc123:/etc/passwd def456:/etc/passwd SEE ALSO plakar(1), plakar-backup(1)\nPlakar May 5, 2026 PLAKAR-DIFF(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-diff/","section":"Docs","summary":"Show differences between files in Plakar snapshots","title":"diff","type":"docs"},{"content":"PLAKAR-DIGEST(1) General Commands Manual PLAKAR-DIGEST(1) NAME plakar-digest \u0026#x2014; Compute digests for files in a Plakar snapshot\nSYNOPSIS plakar digest [-hashing algorithm] snapshotID[:path] [...] DESCRIPTION The plakar digest command computes and displays digests for the specified path in the given snapshotID. Multiple snapshotID and path arguments may be given. By default, the command computes the digest by reading the file contents.\nThe options are as follows:\n-hashing algorithm Use algorithm to compute the digest. Defaults to SHA256. EXIT STATUS The plakar-digest utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Compute the digest of a file within a snapshot:\n$ plakar digest abc123:/etc/passwd Use BLAKE3 as the digest algorithm:\n$ plakar digest -hashing BLAKE3 abc123:/etc/netstart SEE ALSO plakar(1)\nPlakar May 5, 2026 PLAKAR-DIGEST(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-digest/","section":"Docs","summary":"Compute digests for files in a Plakar snapshot","title":"digest","type":"docs"},{"content":"PLAKAR-DUP(1) General Commands Manual PLAKAR-DUP(1) NAME plakar-dup \u0026#x2014; Duplicates an existing snapshot with a different ID\nSYNOPSIS plakar dup snapshots ... 
DESCRIPTION The plakar dup command creates a duplicate of an existing snapshot with a new snapshot ID. The new snapshot is an exact copy of the original, including all files and metadata.\nEXIT STATUS The plakar-dup utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Create a duplicate of a snapshot with ID \u0026quot;abc123\u0026quot;:\n$ plakar dup abc123 SEE ALSO plakar(1)\nPlakar May 5, 2026 PLAKAR-DUP(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-dup/","section":"Docs","summary":"Duplicates an existing snapshot with a different ID","title":"dup","type":"docs"},{"content":"PLAKAR-INFO(1) General Commands Manual PLAKAR-INFO(1) NAME plakar-info \u0026#x2014; Display detailed information about internal structures\nSYNOPSIS plakar info [-errors] [snapshot] DESCRIPTION The plakar info command provides detailed information about a Plakar repository and snapshots. The type of information displayed depends on the specified argument. Without any arguments, display information about the repository.\nThe options are as follows:\n-errors Show errors within the specified snapshot. EXIT STATUS The plakar-info utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Show repository information:\n$ plakar info Show detailed information for a snapshot:\n$ plakar info abc123 Show errors within a snapshot:\n$ plakar info -errors abc123 SEE ALSO plakar(1), plakar-backup(1)\nPlakar May 5, 2026 PLAKAR-INFO(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-info/","section":"Docs","summary":"Display detailed information about internal structures","title":"info","type":"docs"},{"content":"PLAKAR-LOCATE(1) General Commands Manual PLAKAR-LOCATE(1) NAME plakar-locate \u0026#x2014; Find filenames in a Plakar snapshot\nSYNOPSIS plakar locate [-snapshot snapshotID] patterns ... 
DESCRIPTION The plakar locate command searches snapshots to find file names matching any of the given patterns and prints the abbreviated snapshot ID and the full path of the matched files. Matching works according to the shell globbing rules.\nIf neither the -snapshot flag nor any location flags are given, plakar locate will search in all snapshots.\nIn addition to the flags described below, plakar locate supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-snapshot snapshotID Limit the search to the given snapshot. EXIT STATUS The plakar-locate utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Search for files ending in \u0026#x201C;wd\u0026#x201D;:\n$ plakar locate '*wd' abc123:/etc/master.passwd abc123:/etc/passwd SEE ALSO plakar(1), plakar-backup(1), plakar-query(7)\nCAVEATS The patterns may have to be quoted to avoid the shell attempting to expand them.\nPlakar May 5, 2026 PLAKAR-LOCATE(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-locate/","section":"Docs","summary":"Find filenames in a Plakar snapshot","title":"locate","type":"docs"},{"content":"PLAKAR-LOGIN(1) General Commands Manual PLAKAR-LOGIN(1) NAME plakar-login \u0026#x2014; Authenticate to Plakar services\nSYNOPSIS plakar login [-no-spawn] [-status] [-email email | -env | -github] DESCRIPTION The plakar login command initiates an authentication flow with the Plakar platform. Login is optional for most plakar commands but required to enable certain services, such as alerting. See also plakar-service(1).\nOnly one authentication method may be specified per invocation: the -email, -env, and -github options are mutually exclusive. If none is provided, -github is assumed.\nThe options are as follows:\n-email email Send a login link to the specified email address. Visiting that link will authenticate plakar. 
-env Persist the value of the PLAKAR_TOKEN environment variable into the configuration. Generate this token with plakar-token-create(1). -github Use GitHub OAuth to authenticate. A browser will be spawned to initiate the OAuth flow unless -no-spawn is specified. -no-spawn Do not automatically open a browser window for authentication flows. -status Check whether the user is currently logged in. This option cannot be used with any other options. EXAMPLES Start a login via email:\n$ plakar login -email user@example.com Authenticate via GitHub (default, opens browser):\n$ plakar login SEE ALSO plakar(1), plakar-logout(1), plakar-service(1)\nPlakar May 5, 2026 PLAKAR-LOGIN(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-login/","section":"Docs","summary":"Authenticate to Plakar services","title":"login","type":"docs"},{"content":"PLAKAR-LOGOUT(1) General Commands Manual PLAKAR-LOGOUT(1) NAME plakar-logout \u0026#x2014; Log out from Plakar services\nSYNOPSIS plakar logout DESCRIPTION The plakar logout command logs out an authenticated session with the Plakar platform.\nEXAMPLES Log out from the current session:\n$ plakar logout SEE ALSO plakar(1), plakar-login(1), plakar-service(1)\nPlakar July 8, 2025 PLAKAR-LOGOUT(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-logout/","section":"Docs","summary":"Log out from Plakar services","title":"logout","type":"docs"},{"content":"PLAKAR-LS(1) General Commands Manual PLAKAR-LS(1) NAME plakar-ls \u0026#x2014; List snapshots and their contents in a Plakar repository\nSYNOPSIS plakar ls [-uuid] [-recursive] [-tags] [snapshotID:path] DESCRIPTION The plakar ls command lists snapshots stored in a Plakar repository, and optionally displays the contents of path in a specified snapshot.\nIn addition to the flags described below, plakar ls supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as 
follows:\n-uuid Display the full UUID for each snapshot instead of the shorter snapshot ID. -recursive List directory contents recursively when exploring snapshot contents. -tags Show tags in snapshot listing. EXIT STATUS The plakar-ls utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES List all snapshots with their short IDs:\n$ plakar ls List all snapshots with UUIDs instead of short IDs:\n$ plakar ls -uuid List snapshots with a specific tag:\n$ plakar ls -tag daily-backup List contents of a specific snapshot:\n$ plakar ls abc123 Recursively list contents of a specific snapshot:\n$ plakar ls -recursive abc123:/etc SEE ALSO plakar(1), plakar-query(7)\nPlakar May 5, 2026 PLAKAR-LS(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-ls/","section":"Docs","summary":"List snapshots and their contents in a Plakar repository","title":"ls","type":"docs"},{"content":"PLAKAR-MAINTENANCE(1) General Commands Manual PLAKAR-MAINTENANCE(1) NAME plakar-maintenance \u0026#x2014; Remove unused data from a Plakar repository\nSYNOPSIS plakar maintenance DESCRIPTION The plakar maintenance command removes unused blobs, objects, and chunks from a Plakar repository to reduce storage space. It identifies unreferenced data and reorganizes packfiles to ensure only active snapshots and their dependencies are retained. 
The maintenance process updates snapshot indexes to reflect these changes.\nEXIT STATUS The plakar-maintenance utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nPlakar May 5, 2026 PLAKAR-MAINTENANCE(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-maintenance/","section":"Docs","summary":"Remove unused data from a Plakar repository","title":"maintenance","type":"docs"},{"content":"PLAKAR-MOUNT(1) General Commands Manual PLAKAR-MOUNT(1) NAME plakar-mount \u0026#x2014; Mount Plakar snapshots as a read-only filesystem\nSYNOPSIS plakar mount [-to mountpoint] [snapshotID] DESCRIPTION The plakar mount command mounts a Plakar repository snapshot as a read-only filesystem at the specified mountpoint. This allows users to access snapshot contents as if they were part of the local file system, providing easy browsing and retrieval of files without needing to explicitly restore them. This command may not work on all operating systems.\nIn addition to the flags described below, plakar mount supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-to mountpoint Specify the mount location. The mountpoint can either be: A directory path for FUSE mounts An HTTP address including port for remote mounting (e.g., \u0026#x2018;http://hostname:8080\u0026#x2019;) If not specified, mount will attempt a FUSE mount in the working directory with a random subdirectory name. snapshotID Optional. Specifies which snapshot to mount. If not provided, all snapshots are mounted. 
EXIT STATUS The plakar-mount utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Mount all snapshots to a local directory:\n$ plakar mount -to ~/mnt Mount the latest snapshot to a local directory:\n$ plakar mount -to ~/mnt -latest Mount a specific snapshot by ID to a directory:\n$ plakar mount -to ~/mnt abc123 Mount snapshots matching a filter (e.g., snapshots with tag \u0026quot;daily-backup\u0026quot;):\n$ plakar mount -to ~/mnt -tag daily-backup Mount a snapshot to an HTTP endpoint:\n$ plakar mount -to http://hostname:8080 Mount a specific snapshot to an HTTP endpoint:\n$ plakar mount -to http://hostname:8080 abc123 SEE ALSO plakar(1), plakar-query(7)\nPlakar May 5, 2026 PLAKAR-MOUNT(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-mount/","section":"Docs","summary":"Mount Plakar snapshots as a read-only filesystem","title":"mount","type":"docs"},{"content":"PLAKAR-PKG-ADD(1) General Commands Manual PLAKAR-PKG-ADD(1) NAME plakar-pkg-add \u0026#x2014; Install Plakar plugins\nSYNOPSIS plakar pkg add [-u] plugin ... DESCRIPTION The plakar pkg add command adds a local or a remote plugin.\nIf plugin matches an existing local file, it is installed directly. Otherwise, it is treated as a recipe name and downloaded from the Plakar plugin server, which requires a login via the plakar-login(1) command.\nInstalling plugins without logging in is possible via the plakar-pkg-build(1) command, provided you have the necessary dependencies to build them locally (currently, official plugins require make and a working Go toolchain).\nTo install a specific version of a plugin, use the name@version syntax.\nThe options are as follows:\n-u Update the specified plugins. If none are given, attempt to update all the installed ones. FILES ~/.cache/plakar/plugins/ Plugin cache directory. Respects XDG_CACHE_HOME if set. ~/.local/share/plakar/plugins Plugin directory. Respects XDG_DATA_HOME if set. 
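The FILES lookup above follows the XDG base-directory convention. A minimal sketch of the fallback rule (the helper names are illustrative, not part of plakar itself):

```shell
# Resolve the plugin directories the way the FILES section describes:
# use XDG_CACHE_HOME / XDG_DATA_HOME when set, otherwise fall back to
# the defaults under $HOME. Helper names are illustrative only.
plugin_cache_dir() {
    echo "${XDG_CACHE_HOME:-$HOME/.cache}/plakar/plugins"
}
plugin_data_dir() {
    echo "${XDG_DATA_HOME:-$HOME/.local/share}/plakar/plugins"
}
```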
SEE ALSO plakar-login(1), plakar-pkg-build(1), plakar-pkg-create(1), plakar-pkg-rm(1), plakar-pkg-show(1)\nPlakar March 23, 2026 PLAKAR-PKG-ADD(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-pkg-add/","section":"Docs","summary":"Install Plakar plugins","title":"pkg-add","type":"docs"},{"content":"PLAKAR-PKG-BUILD(1) General Commands Manual PLAKAR-PKG-BUILD(1) NAME plakar-pkg-build \u0026#x2014; Build Plakar plugins from source\nSYNOPSIS plakar pkg build recipe.yaml DESCRIPTION The plakar pkg build command fetches the sources and builds the plugin as specified in the given plakar-pkg-recipe.yaml(5). If it builds successfully, the resulting plugin will be created in the current working directory.\nENVIRONMENT PLAKAR_CLONE_TOKEN If set, this token will be used to authenticate git clone operations. This is useful for cloning private repositories. FILES ~/.cache/plakar/plugins/ Plugin cache directory. Respects XDG_CACHE_HOME if set. ~/.local/share/plakar/plugins Plugin directory. Respects XDG_DATA_HOME if set. SEE ALSO plakar-pkg-add(1), plakar-pkg-create(1), plakar-pkg-rm(1), plakar-pkg-show(1), plakar-pkg-recipe.yaml(5)\nPlakar July 11, 2025 PLAKAR-PKG-BUILD(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-pkg-build/","section":"Docs","summary":"Build Plakar plugins from source","title":"pkg-build","type":"docs"},{"content":"PLAKAR-PKG-CREATE(1) General Commands Manual PLAKAR-PKG-CREATE(1) NAME plakar-pkg-create \u0026#x2014; Package a plugin\nSYNOPSIS plakar pkg create manifest.yaml version DESCRIPTION The plakar pkg create command assembles a plugin using the provided plakar-pkg-manifest.yaml(5) and version.\nAll the files needed for the plugin must already be available, i.e. 
executables must already be built.\nAll external files must reside in the same directory as the manifest.yaml or in subdirectories.\nSEE ALSO plakar-pkg-add(1), plakar-pkg-build(1), plakar-pkg-rm(1), plakar-pkg-show(1), plakar-pkg-manifest.yaml(5)\nPlakar July 11, 2025 PLAKAR-PKG-CREATE(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-pkg-create/","section":"Docs","summary":"Package a plugin","title":"pkg-create","type":"docs"},{"content":"PLAKAR-PKG-MANIFEST.YAML(5) File Formats Manual PLAKAR-PKG-MANIFEST.YAML(5) NAME manifest.yaml \u0026#x2014; Manifest for plugin assembly\nDESCRIPTION The manifest.yaml file format describes how to package a plugin. No build or compilation is done, so all executables and other files must be prepared beforehand.\nmanifest.yaml must have a top-level YAML object with the following fields:\nname The name of the plugin. display_name The displayed name in the UI. description A short description of the connectors. homepage A link to the homepage. license The license of the connectors. tags A YAML array of strings for tags that describe the connectors. api_version The API version supported. version The plugin version, which doubles as the git tag as well. It must follow semantic versioning and have a \u0026#x2018;v\u0026#x2019; prefix, e.g. \u0026#x2018;v1.2.3\u0026#x2019;. connectors A YAML array of objects with the following properties: type The connector type, one of importer, exporter, or store. protocols An array of YAML strings containing all the protocols that the connector supports. location_flags An optional array of YAML strings describing some properties of the connector. These properties are: localfs Whether paths given to this connector have to be made absolute. file Whether this store backend handles a Kloset in a single file, e.g. a ptar file. executable Path to the plugin executable. extra_file An optional array of YAML strings. 
These are extra files that need to be included in the package. EXAMPLES A sample manifest for the \u0026#x201C;fs\u0026#x201D; plugin is as follows:\n# manifest.yaml name: fs display_name: file system connector description: file storage but as external plugin homepage: https://github.com/PlakarKorp/integration-fs license: ISC tags: [ fs, filesystem, \u0026quot;local files\u0026quot; ] api_version: 1.0.0 version: 1.0.0 connectors: - type: importer executable: fs-importer protocols: [fs] - type: exporter executable: fs-exporter protocols: [fs] - type: storage executable: fs-store protocols: [fs] SEE ALSO plakar-pkg-create(1)\nPlakar July 20, 2025 PLAKAR-PKG-MANIFEST.YAML(5) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-pkg-manifest.yaml/","section":"Docs","summary":"Manifest for plugin assembly","title":"pkg-manifest.yaml","type":"docs"},{"content":"PLAKAR-PKG-RECIPE.YAML(5) File Formats Manual PLAKAR-PKG-RECIPE.YAML(5) NAME recipe.yaml \u0026#x2014; Recipe to build Plakar plugins from source\nDESCRIPTION The recipe.yaml file format describes how to fetch and build Plakar plugins. It must have a top-level YAML object with the following fields:\nname The name of the plugin. version The plugin version, which doubles as the git tag as well. It must follow semantic versioning and have a \u0026#x2018;v\u0026#x2019; prefix, e.g. \u0026#x2018;v1.2.3\u0026#x2019;. repository URL to the git repository holding the plugin. 
EXAMPLES A sample recipe to build the \u0026#x201C;fs\u0026#x201D; plugin is as follows:\n# recipe.yaml name: fs version: v1.0.0 repository: https://github.com/PlakarKorp/integrations-fs SEE ALSO plakar-pkg-build(1)\nPlakar July 11, 2025 PLAKAR-PKG-RECIPE.YAML(5) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-pkg-recipe.yaml/","section":"Docs","summary":"Recipe to build Plakar plugins from source","title":"pkg-recipe.yaml","type":"docs"},{"content":"PLAKAR-PKG-RM(1) General Commands Manual PLAKAR-PKG-RM(1) NAME plakar-pkg-rm \u0026#x2014; Uninstall Plakar plugins\nSYNOPSIS plakar pkg rm plugin ... DESCRIPTION The plakar pkg rm command removes plugins that have been previously installed with plakar-pkg-add(1) command.\nThe list of plugins can be obtained with plakar-pkg-show(1).\nEXAMPLES Removing a plugin:\n$ plakar pkg show epic-v1.2.3 $ plakar pkg rm epic-v1.2.3 SEE ALSO plakar-pkg-add(1), plakar-pkg-build(1), plakar-pkg-create(1), plakar-pkg-show(1)\nPlakar July 11, 2025 PLAKAR-PKG-RM(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-pkg-rm/","section":"Docs","summary":"Uninstall Plakar plugins","title":"pkg-rm","type":"docs"},{"content":"PLAKAR-PKG-SHOW(1) General Commands Manual PLAKAR-PKG-SHOW(1) NAME plakar-pkg-show \u0026#x2014; Show installed Plakar plugins\nSYNOPSIS plakar pkg show [-available] [-long] DESCRIPTION The plakar pkg show command shows the currently installed plugins.\nThe options are as follows:\n-available Instead of installed packages, show the set of prebuilt packages available for this system. -long Show the full package name. FILES ~/.cache/plakar/plugins/ Plugin cache directory. Respects XDG_CACHE_HOME if set. ~/.local/share/plakar/plugins Plugin directory. Respects XDG_DATA_HOME if set. 
SEE ALSO plakar-pkg-add(1), plakar-pkg-build(1), plakar-pkg-create(1), plakar-pkg-rm(1)\nPlakar July 11, 2025 PLAKAR-PKG-SHOW(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-pkg-show/","section":"Docs","summary":"Show installed Plakar plugins","title":"pkg-show","type":"docs"},{"content":"PLAKAR(1) General Commands Manual PLAKAR(1) NAME plakar \u0026#x2014; effortless backups\nSYNOPSIS plakar [-concurrency number] [-config dir] [-cpu number] [-json] [-keyfile path] [-quiet] [-silent] [-stdio] [-time] [-trace subsystems] [at kloset] subcommand ... DESCRIPTION plakar is a tool to create distributed, versioned backups with compression, encryption, and data deduplication.\nBy default, plakar operates on the Kloset store at ~/.plakar. This can be changed with the at option or the PLAKAR_REPOSITORY environment variable.\nThe following options are available:\n-concurrency number Set the maximum number of parallel tasks for faster processing. Defaults to the CPU count. -config dir Specify an alternate configuration directory. Defaults to ~/.config/plakar. -cpu number Limit the number of parallel workers plakar uses to number. By default it's the number of online CPUs. -json Use newline-delimited JSON as output format for some subcommands. -keyfile path Read the passphrase from the key file at path instead of prompting. Overrides the PLAKAR_PASSPHRASE environment variable. -quiet Disable all output except for errors. -silent Disable all output. -stdio Use text lines as output format for some subcommands instead of the default ncurses frontend. Enabled by default when the standard output is not a terminal. -time Report the time the subcommand took to run. -trace subsystems Display trace logs. subsystems is a comma-separated series of keywords to enable the trace logs for different subsystems: all, trace, repository, snapshot and server. at kloset Operates on the given kloset store. 
It can be a path, a URI, or a label in the form \u0026#x201C;@name\u0026#x201D; to reference a configuration created with plakar-store(1). General Commands help Show this manpage and the ones for the subcommands. login Authenticate to Plakar services, refer to plakar-login(1). logout Log out from Plakar services, refer to plakar-logout(1). service Manage additional Plakar services that require you to be logged in, refer to plakar-service(1). token create Generate a token to interact with Plakar services, refer to plakar-token-create(1). version Display the current Plakar version, refer to plakar-version(1). Configuration management destination Manage configurations for the destination connectors, refer to plakar-destination(1). source Manage configurations for the source connectors, refer to plakar-source(1). store Manage configurations for storage connectors, refer to plakar-store(1). Kloset management check Check data integrity in a Kloset store, refer to plakar-check(1). create Create a new Kloset store, refer to plakar-create(1). info Display detailed information about internal structures, refer to plakar-info(1). maintenance Remove unused data from a Kloset store, refer to plakar-maintenance(1). prune Prune snapshots according to a policy, refer to plakar-prune(1). ptar Create a .ptar archive, refer to plakar-ptar(1). server Start a Plakar server, refer to plakar-server(1). sync Synchronize snapshots between Kloset stores, refer to plakar-sync(1). ui Serve the Plakar web user interface, refer to plakar-ui(1). Snapshot management archive Create an archive from a Kloset snapshot, refer to plakar-archive(1). backup Create a new Kloset snapshot, refer to plakar-backup(1). cat Display file contents from a Kloset snapshot, refer to plakar-cat(1). diff Show differences between files in a Kloset snapshot, refer to plakar-diff(1). digest Compute digests for files in a Kloset snapshot, refer to plakar-digest(1). 
dup Duplicate an existing snapshot with a different ID, refer to plakar-dup(1). locate Find filenames in a Kloset snapshot, refer to plakar-locate(1). ls List snapshots and their contents in a Kloset store, refer to plakar-ls(1). mount Mount Kloset snapshots as a read-only filesystem, refer to plakar-mount(1). restore Restore files from a Kloset snapshot, refer to plakar-restore(1). rm Remove snapshots from a Kloset store, refer to plakar-rm(1). Plugin handling pkg add Install a plugin, refer to plakar-pkg-add(1). pkg build Build a plugin from source, refer to plakar-pkg-build(1). pkg create Package a plugin, refer to plakar-pkg-create(1). pkg rm Uninstall a plugin, refer to plakar-pkg-rm(1). pkg show List installed plugins, refer to plakar-pkg-show(1). ENVIRONMENT PLAKAR_PASSPHRASE Passphrase to unlock the Kloset store; overrides the one from the configuration. If set, plakar won't prompt to unlock. The option keyfile overrides this environment variable. PLAKAR_REPOSITORY Reference to the Kloset store. PLAKAR_TOKEN Token to authenticate for Plakar services. FILES ~/.cache/plakar Plakar cache directories. ~/.config/plakar/destinations.yml Restore destinations configuration. ~/.config/plakar/sources.yml Backup sources configuration. ~/.config/plakar/stores.yml Kloset stores configuration. ~/.plakar Default Kloset store location. EXIT STATUS The following exit codes are aligned with sysexits(3) where applicable:\n0 Command completed successfully. 1 A general error occurred. 64 (EX_USAGE) Invalid command-line arguments or flags. 65 (EX_DATAERR) Data integrity check failed (corrupted chunks, verification mismatch). 66 (EX_NOINPUT) The repository could not be opened or located. 77 (EX_NOPERM) Authentication or decryption failure (wrong passphrase, missing keyfile). 78 (EX_CONFIG) Incompatible repository version. 
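The sysexits-aligned codes above lend themselves to scripted handling. A minimal sketch that maps the documented values to messages (the `explain_exit` helper is illustrative, not part of plakar):

```shell
# Map plakar's documented exit codes (aligned with sysexits(3)) to
# human-readable messages. The function name is illustrative.
explain_exit() {
    case "$1" in
        0)  echo "success" ;;
        64) echo "EX_USAGE: invalid arguments or flags" ;;
        65) echo "EX_DATAERR: data integrity check failed" ;;
        66) echo "EX_NOINPUT: repository could not be opened" ;;
        77) echo "EX_NOPERM: authentication or decryption failure" ;;
        78) echo "EX_CONFIG: incompatible repository version" ;;
        *)  echo "general error ($1)" ;;
    esac
}
# Typical use in a backup wrapper:
#   plakar backup; explain_exit "$?"
```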
EXAMPLES Create an encrypted Kloset store at the default location:\n$ plakar create Create an encrypted Kloset store on AWS S3:\n$ plakar store add mys3bucket \\ location=s3://s3.eu-west-3.amazonaws.com/backups \\ access_key=\u0026quot;access_key\u0026quot; \\ secret_access_key=\u0026quot;secret_key\u0026quot; $ plakar at @mys3bucket create Create a snapshot of the current directory on the @mys3bucket Kloset store:\n$ plakar at @mys3bucket backup List the snapshots of the default Kloset store:\n$ plakar ls Restore the file \u0026#x201C;notes.md\u0026#x201D; in the current directory from the snapshot with id \u0026#x201C;abcd\u0026#x201D;:\n$ plakar restore -to . abcd:notes.md Remove snapshots older than 30 days:\n$ plakar rm -before 30d Plakar May 5, 2026 PLAKAR(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar/","section":"Docs","summary":"effortless backups","title":"plakar","type":"docs"},{"content":"","date":"6 May 2026","externalUrl":null,"permalink":"/","section":"Plakar | The Open Standard for Backup and Restore","summary":"","title":"Plakar | The Open Standard for Backup and Restore","type":"page"},{"content":"PLAKAR-POLICY(1) General Commands Manual PLAKAR-POLICY(1) NAME plakar-policy \u0026#x2014; Manage Plakar retention policies\nSYNOPSIS plakar policy subcommand ... DESCRIPTION The plakar policy command manages the retention policies for plakar-prune(1).\nThe configuration consists of a set of named entries, each describing a retention policy.\nThe subcommands are as follows:\nadd name [option=value ...] Create a new policy entry identified by name. Additional parameters can be set by adding option=value parameters. rm name Remove the policy identified by name from the configuration. set name [option=value ...] Set the option to value for the policy identified by name. Multiple option/value pairs can be specified. show [-json] [-yaml] [name ...] Display the current policies configuration. 
-json and -yaml control the output format, which is YAML by default. unset name [option ...] Remove the option for the policy identified by name. The available options are described in plakar-query(7): each option corresponds to the similarly named flag.\nEXIT STATUS The plakar-policy utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Create a policy \u0026#x2018;weekly\u0026#x2019; that keeps one backup per week and discards backups older than three months:\n$ plakar policy add weekly $ plakar policy set weekly since='3 months' $ plakar policy set weekly per-week=1 Prune snapshots according to the \u0026#x2018;weekly\u0026#x2019; policy:\n$ plakar prune -policy weekly SEE ALSO plakar(1), plakar-prune(1)\nPlakar September 11, 2025 PLAKAR-POLICY(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-policy/","section":"Docs","summary":"Manage Plakar retention policies","title":"policy","type":"docs"},{"content":"PLAKAR-PRUNE(1) General Commands Manual PLAKAR-PRUNE(1) NAME plakar-prune \u0026#x2014; Prune snapshots according to a policy\nSYNOPSIS plakar prune [-apply] [-policy name] [snapshotID ...] DESCRIPTION The plakar prune command deletes snapshots from a Plakar repository. Snapshots can be filtered for deletion by age, by tag, or by specifying the snapshot IDs to remove. If no snapshotID is provided, either -older or -tag must be specified to filter the snapshots to delete.\nplakar prune supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe arguments are as follows:\n-apply Delete the matching snapshots. The default is to only show the snapshots that would be removed without actually executing the operation. -policy name Use the given policy. See plakar-policy(1) for how policies are managed. 
EXIT STATUS The plakar-prune utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Remove a specific snapshot by ID:\n$ plakar prune abc123 Remove snapshots older than 30 days:\n$ plakar prune -days 30 Remove snapshots with a specific tag:\n$ plakar prune -tag daily-backup Remove snapshots older than 1 year with a specific tag:\n$ plakar prune -years 1 -tag daily-backup SEE ALSO plakar(1), plakar-backup(1), plakar-policy(1), plakar-query(7)\nPlakar May 5, 2026 PLAKAR-PRUNE(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-prune/","section":"Docs","summary":"Prune snapshots according to a policy","title":"prune","type":"docs"},{"content":"PLAKAR-PTAR(1) General Commands Manual PLAKAR-PTAR(1) NAME plakar-ptar \u0026#x2014; generate a self-contained Kloset archive (.ptar)\nSYNOPSIS plakar ptar [-plaintext] [-overwrite] [-k location] -o file.ptar [path ...] DESCRIPTION The plakar ptar command creates a single portable archive (a \u0026#x2018;.ptar\u0026#x2019; file) that bundles one or more existing Plakar repositories (\u0026#x201C;klosets\u0026#x201D;) and/or arbitrary filesystem paths into a self-contained package. The resulting archive preserves repository metadata, snapshots and data chunks, and is compressed and encrypted for secure transport or off-site storage.\nAt least one data source must be supplied: either one or more -k or -kloset options naming remote or local kloset repositories, and/or one or more path arguments identifying files or directories to back up. The destination archive name is mandatory and supplied with -o.\nUnless the -overwrite flag is given, plakar ptar refuses to replace an existing archive.\nThe options are as follows:\n-plaintext Disable transparent encryption of the archive. If omitted, plakar ptar encrypts repository data using a key derived from the passphrase specified via PLAKAR_PASSPHRASE or prompted interactively. 
-overwrite Overwrite an existing .ptar file at the destination path. -k location, -kloset location Add a kloset repository to include in the archive. May be specified multiple times to bundle several repositories. -o file.ptar Path of the archive to create. This option is required. path ... Zero or more filesystem paths to back up directly into the archive. ENVIRONMENT PLAKAR_PASSPHRASE Passphrase used to derive the encryption key when encryption is enabled. EXIT STATUS The plakar-ptar utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1), plakar-backup(1), plakar-create(1)\nPlakar May 5, 2026 PLAKAR-PTAR(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-ptar/","section":"Docs","summary":"generate a self-contained Kloset archive (.ptar)","title":"ptar","type":"docs"},{"content":"PLAKAR-QUERY(7) Miscellaneous Information Manual PLAKAR-QUERY(7) NAME plakar-query \u0026#x2014; query flags shared among many Plakar subcommands\nDESCRIPTION What follows is a set of command line arguments that many plakar(1) subcommands provide to filter snapshots.\nThere are two kinds of flags:\nmatchers These select snapshots. If combined, the result is the union of the various matchers. filters These instead filter the output of the matchers by keeping only snapshots matching certain criteria. If combined, the result is the intersection of the various filters. 
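As a toy illustration of the combination rule (this is not plakar code, and the snapshot names and flag pairings in the comments are made up): matchers are unioned, then the filters are intersected over that selection.

```shell
# Toy illustration of the combination rule: matchers are unioned,
# filters are intersected. The sets are stand-ins for snapshot IDs.
printf 'snap1\nsnap2\n' > matcher_a      # e.g. snapshots matched by -days 7
printf 'snap2\nsnap3\n' > matcher_b      # e.g. snapshots matched by -months 1
sort -u matcher_a matcher_b > selected   # union of the matchers
printf 'snap2\nsnap3\n' > filter_before  # e.g. snapshots passing -before
printf 'snap1\nsnap2\n' > filter_tag     # e.g. snapshots passing -tag
result=$(grep -Fx -f filter_before selected | grep -Fx -f filter_tag)
echo "$result"                           # snap2 survives both filters
rm -f matcher_a matcher_b selected filter_before filter_tag
```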
If no matcher is given, all the snapshots are implicitly selected, and then filtered according to the given filters, if any.\nThe matchers are divided into:\nmatchers that select snapshots from the last n units of time: -minutes n \u0026#x00A0; -hours n \u0026#x00A0; -days n \u0026#x00A0; -weeks n \u0026#x00A0; -months n \u0026#x00A0; -years n \u0026#x00A0; Or matchers that select snapshots taken during the last n occurrences of a given day of the week:\n-mondays n \u0026#x00A0; -thuesdays n \u0026#x00A0; -wednesdays n \u0026#x00A0; -thursdays n \u0026#x00A0; -fridays n \u0026#x00A0; -saturdays n \u0026#x00A0; -sundays n \u0026#x00A0; matchers that select at most n snapshots per time period: -per-minute n \u0026#x00A0; -per-hour n \u0026#x00A0; -per-day n \u0026#x00A0; -per-week n \u0026#x00A0; -per-month n \u0026#x00A0; -per-year n \u0026#x00A0; -per-monday n \u0026#x00A0; -per-thuesday n \u0026#x00A0; -per-wednesday n \u0026#x00A0; -per-thursday n \u0026#x00A0; -per-friday n \u0026#x00A0; -per-saturday n \u0026#x00A0; -per-sunday n \u0026#x00A0; The filters are:\n-before date Select snapshots older than the given date. The date may be in RFC3339 format, as \u0026#x201C;YYYY-mm-DD HH:MM\u0026#x201D;, \u0026#x201C;YYYY-mm-DD HH:MM:SS\u0026#x201D;, \u0026#x201C;YYYY-mm-DD\u0026#x201D;, or \u0026#x201C;YYYY/mm/DD\u0026#x201D; where YYYY is a year, mm a month, DD a day, HH an hour in 24-hour format, MM minutes and SS seconds. Alternatively, human-style intervals like \u0026#x201C;half an hour\u0026#x201D;, \u0026#x201C;a month\u0026#x201D; or \u0026#x201C;2h30m\u0026#x201D; are also accepted.\n-category name Select snapshots whose category is name. -environment name Select snapshots whose environment is name. -job name Select snapshots whose job is name. -latest Select only the latest snapshot. -name name Select snapshots whose name is name. -perimeter name Select snapshots whose perimeter is name. -root path Select snapshots whose root directory is path. 
May be specified multiple times; snapshots are selected if any of the given paths matches. -since date Select snapshots newer than the given date. The accepted format is the same as -before. -tag name Select snapshots tagged with name. May be specified multiple times, and multiple tags may be given at the same time if comma-separated. If a tag name is prefixed with an exclamation mark \u0026#x2018;!\u0026#x2019;, the matching is inverted and the snapshot is ignored if it contains said tag. Plakar November 28, 2025 PLAKAR-QUERY(7) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-query/","section":"Docs","summary":"query flags shared among many Plakar subcommands","title":"query","type":"docs"},{"content":"PLAKAR-RESTORE(1) General Commands Manual PLAKAR-RESTORE(1) NAME plakar-restore \u0026#x2014; Restore files from a Plakar snapshot\nSYNOPSIS plakar restore [-category category] [-environment environment] [-job job] [-name name] [-perimeter perimeter] [-skip-permissions] [-tag tag] [-to directory] [-o option=value] [snapshotID:path ...] DESCRIPTION The plakar restore command is used to restore files and directories at path from a specified Plakar snapshot to the local file system. If path is omitted, then all the files in the specified snapshotID are restored. If no snapshotID is provided, the command attempts to restore the current working directory from the last matching snapshot.\nThe options are as follows:\n-name string Only apply command to snapshots that match name. -category string Only apply command to snapshots that match category. -environment string Only apply command to snapshots that match environment. -perimeter string Only apply command to snapshots that match perimeter. -job string Only apply command to snapshots that match job. -tag string Only apply command to snapshots that match tag. 
-skip-permissions Skip restoring file permissions and ownership during restore, defaulting to 0750 for directories and 0640 for files. -to directory Specify the base directory to which the files will be restored. If omitted, files are restored to the current working directory. -o option=value Can be used to pass extra arguments to the destination connector. The given option takes precedence over the configuration file. EXIT STATUS The plakar-restore utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Restore all files from a specific snapshot to the current directory:\n$ plakar restore abc123 Restore to a specific directory:\n$ plakar restore -to /mnt/ abc123 Restore a specific path to a specific directory:\n$ plakar restore -to /mnt/ abc123:/etc/apache2 Restore to a specific destination:\n$ plakar restore -to @s3target abc123 Restore a specific path to a specific destination:\n$ plakar restore -to @s3target abc123:/etc/apache2 SEE ALSO plakar(1), plakar-backup(1)\nPlakar May 5, 2026 PLAKAR-RESTORE(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-restore/","section":"Docs","summary":"Restore files from a Plakar snapshot","title":"restore","type":"docs"},{"content":"PLAKAR-RM(1) General Commands Manual PLAKAR-RM(1) NAME plakar-rm \u0026#x2014; Remove snapshots from a Plakar repository\nSYNOPSIS plakar rm [-apply] [snapshotID ...] DESCRIPTION The plakar rm command deletes snapshots from a Plakar repository. Snapshots can be filtered for deletion by age, by tag, or by specifying the snapshot IDs to remove. If no snapshotID is provided, either -before or -tag must be specified to filter the snapshots to delete.\nIn addition to the flags described below, plakar rm supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe arguments are as follows:\n-apply Delete the matching snapshots. 
By default, plakar rm only prints the snapshots that would be deleted. EXIT STATUS The plakar-rm utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Remove a specific snapshot by ID:\n$ plakar rm abc123 Remove snapshots older than 30 days:\n$ plakar rm -before 30d Remove snapshots with a specific tag:\n$ plakar rm -tag daily-backup Remove snapshots older than 1 year with a specific tag:\n$ plakar rm -before 1y -tag daily-backup SEE ALSO plakar(1), plakar-backup(1)\nPlakar May 5, 2026 PLAKAR-RM(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-rm/","section":"Docs","summary":"Remove snapshots from a Plakar repository","title":"rm","type":"docs"},{"content":"PLAKAR-SCHEDULER(1) General Commands Manual PLAKAR-SCHEDULER(1) NAME plakar-scheduler \u0026#x2014; Run the Plakar scheduler\nSYNOPSIS plakar scheduler [-foreground] [start -tasks configfile] [stop] DESCRIPTION The plakar scheduler runs in the background and manages task execution based on the defined schedule.\nThe options are as follows:\n-foreground Run the scheduler in the foreground instead of as a background service. -tasks configfile Specify the configuration file that contains the task definitions and schedules. start -tasks configfile Starts the scheduler service and its tasks from configfile. stop Stop the currently running scheduler service. 
EXIT STATUS The plakar-scheduler utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nPlakar May 5, 2026 PLAKAR-SCHEDULER(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-scheduler/","section":"Docs","summary":"Run the Plakar scheduler","title":"scheduler","type":"docs"},{"content":"PLAKAR-SERVER(1) General Commands Manual PLAKAR-SERVER(1) NAME plakar-server \u0026#x2014; Start a Plakar server\nSYNOPSIS plakar server [-allow-delete] [-listen [host]:port] [-cert path] [-key path] DESCRIPTION The plakar server command starts a Plakar server instance at the provided address, allowing remote interaction with a Kloset store over a network.\nThe options are as follows:\n-allow-delete Enable delete operations. By default, delete operations are disabled to prevent accidental data loss. -listen [host]:port The host and port to listen on, separated by a colon. The host name is optional, and defaults to all available addresses. If -listen is not provided, the server defaults to listening on localhost at port 9876. -cert path Path to a full certificate file in PEM format. If both -cert and -key are provided, the server will expect https connections. If one or both are missing, the server will fall back to http. -key path Path to a certificate private key file in PEM format. 
EXIT STATUS The plakar-server utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Start a plakar server on the local store:\n$ plakar server Start a plakar server on a remote store:\n$ plakar at sftp://example.org server Start a server on a specific address and port:\n$ plakar server -listen 127.0.0.1:12345 Start an https server on a specific address and port:\n$ plakar server -listen backup.example.com:12345 -cert fullchain.pem -key privkey.pem SEE ALSO plakar(1)\nCAVEATS When a host name is provided, plakar server uses only one of the IP addresses it resolves to, preferably IPv4.\nPlakar May 5, 2026 PLAKAR-SERVER(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-server/","section":"Docs","summary":"Start a Plakar server","title":"server","type":"docs"},{"content":"PLAKAR-SERVICE(1) General Commands Manual PLAKAR-SERVICE(1) NAME plakar-service \u0026#x2014; Manage optional Plakar-connected services\nSYNOPSIS plakar service list plakar service add name [key=value ...] plakar service rm name plakar service status name plakar service show name plakar service enable name plakar service disable name plakar service set name [key=value ...] plakar service unset name [key ...] DESCRIPTION The plakar service command allows you to enable, disable, and inspect additional services that integrate with the plakar platform via plakar-login(1) authentication. These services connect to the plakar.io infrastructure, and should only be enabled if you agree to transmit non-sensitive operational data to plakar.io.\nAll subcommands require prior authentication via plakar-login(1).\nServices are managed by the backend and discovered at runtime. For example, when the \u0026#x201C;alerting\u0026#x201D; service is enabled, it will:\nSend email notifications when operations fail. Expose the latest alerting reports in the Plakar UI (see plakar-ui(1)). 
By default, all services are disabled.\nSUBCOMMANDS list Display the list of available services. add name [key=value ...] Set the configuration for the service identified by name and enable it. The configuration is defined by the given set of key/value pairs. The existing configuration, if any, is discarded. rm name Disable the service identified by name and discard its configuration. status name Display the current status (enabled or disabled) of the named service. show name Display the configuration for the specified service. enable name Enable the specified service. disable name Disable the specified service. set name [key=value ...] Set the configuration key to value for the service identified by name. Multiple key/value pairs can be specified. unset name [key ...] Unset the configuration key for the service identified by name. Multiple keys can be specified. EXAMPLES Check the status of the alerting service:\n$ plakar service status alerting Enable alerting:\n$ plakar service enable alerting Disable alerting:\n$ plakar service disable alerting SEE ALSO plakar-login(1), plakar-ui(1)\nPlakar August 7, 2025 PLAKAR-SERVICE(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-service/","section":"Docs","summary":"Manage optional Plakar-connected services","title":"service","type":"docs"},{"content":"PLAKAR-SOURCE(1) General Commands Manual PLAKAR-SOURCE(1) NAME plakar-source \u0026#x2014; Manage Plakar backup source configuration\nSYNOPSIS plakar source subcommand ... DESCRIPTION The plakar source command manages the configuration of data sources for Plakar to back up.\nThe configuration consists of a set of named entries, each of them describing a source for a backup operation.\nA source is defined by at least a location, specifying the importer to use, and some importer-specific parameters.\nThe subcommands are as follows:\nadd name location [option=value ...] 
Create a new source entry identified by name with the specified location describing the importer to use. Additional importer options can be set by adding option=value parameters. check name Check whether the importer for the source identified by name is properly configured. import [-config location] [-overwrite] [-rclone] [sections ...] Import source configurations from various sources including files, piped input, or rclone configurations. By default, reads from stdin, allowing for piped input from other commands.\nThe -config option specifies a file or URL to read the configuration from.\nThe -overwrite option allows overwriting existing source configurations with the same names.\nThe -rclone option treats the input as an rclone configuration, useful for importing rclone remotes as Plakar sources.\nSpecific sections can be imported by listing their names.\nSections can be renamed during import by appending :newname.\nFor detailed examples and usage patterns, see the https://plakar.io/docs/v1.1.0/guides/importing-configurations/ Importing Configurations guide.\nping name Try to open the data source identified by name to make sure it is reachable. rm name Remove the source identified by name from the configuration. set name [option=value ...] Set the option to value for the source identified by name. Multiple option/value pairs can be specified. show [-secrets] [name ...] Display the current sources configuration. If -secrets is specified, sensitive information such as passwords or tokens will be shown. unset name [option ...] Remove the option for the source entry identified by name. 
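The :newname rename syntax accepted by import can be modelled with a tiny helper; this is a hypothetical sketch of how such an argument splits, not Plakar's actual parsing code:

```python
def parse_section_arg(arg: str) -> tuple[str, str]:
    """Split a section argument of the form 'name' or 'name:newname'.

    Returns (source_section, target_name); without a rename,
    the target name is the same as the source section.
    """
    name, sep, newname = arg.partition(":")
    return name, newname if sep else name

print(parse_section_arg("mybucket"))          # ('mybucket', 'mybucket')
print(parse_section_arg("mybucket:offsite"))  # ('mybucket', 'offsite')
```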
EXIT STATUS The plakar-source utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nPlakar September 11, 2025 PLAKAR-SOURCE(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-source/","section":"Docs","summary":"Manage Plakar backup source configuration","title":"source","type":"docs"},{"content":"PLAKAR-STORE(1) General Commands Manual PLAKAR-STORE(1) NAME plakar-store \u0026#x2014; Manage Plakar store configurations\nSYNOPSIS plakar store subcommand ... DESCRIPTION The plakar store command manages the Plakar store configurations.\nThe configuration consists of a set of named entries, each of them describing a Plakar store holding backups.\nA store is defined by at least a location, specifying the storage implementation to use, and some storage-specific parameters.\nThe subcommands are as follows:\nadd name location [option=value ...] Create a new store entry identified by name with the specified location. Specific additional configuration parameters can be set by adding option=value parameters. check name Check whether the store identified by name is properly configured. import [-config location] [-overwrite] [-rclone] [sections ...] Import store configurations from various sources including files, piped input, or rclone configurations. 
By default, reads from stdin, allowing for piped input from other commands.\nThe -config option specifies a file or URL to read the configuration from.\nThe -overwrite option allows overwriting existing store configurations with the same names.\nThe -rclone option treats the input as an rclone configuration, useful for importing rclone remotes as Plakar stores.\nSpecific sections can be imported by listing their names.\nSections can be renamed during import by appending :newname.\nFor detailed examples and usage patterns, see the https://plakar.io/docs/v1.1.0/guides/importing-configurations/ Importing Configurations guide.\nping name Try to connect to the store identified by name to make sure it is reachable. rm name Remove the store identified by name from the configuration. set name [option=value ...] Set the option to value for the store identified by name. Multiple option/value pairs can be specified. show [-secrets] [name ...] Display the current stores configuration. If -secrets is specified, sensitive information such as passwords or tokens will be shown. unset name [option ...] Remove the option for the store entry identified by name. EXIT STATUS The plakar-store utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nPlakar May 5, 2026 PLAKAR-STORE(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-store/","section":"Docs","summary":"Manage Plakar store configurations","title":"store","type":"docs"},{"content":"PLAKAR-SYNC(1) General Commands Manual PLAKAR-SYNC(1) NAME plakar-sync \u0026#x2014; Synchronize snapshots between Plakar repositories\nSYNOPSIS plakar sync [-cache path] [-packfiles path] [snapshotID] to | from | with repository DESCRIPTION The plakar sync command synchronizes snapshots between two Plakar repositories. 
If a specific snapshot ID is provided, only snapshots with matching IDs will be synchronized.\nplakar sync supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-cache path Specify a path to store the vfs cache. Use the special value \u0026#x2018;no\u0026#x2019; to disable caching. Use the special value \u0026#x2018;vfs\u0026#x2019; to use the in-memory vfs cache (the default). -packfiles path Path where the temporary packfiles are built instead of the default temporary directory. If the special value \u0026#x2018;memory\u0026#x2019; is specified then the packfiles are built in memory. The arguments are as follows:\nto | from | with Specifies the direction of synchronization: to Synchronize snapshots from the local repository to the specified peer repository. from Synchronize snapshots from the specified peer repository to the local repository. with Synchronize snapshots in both directions, ensuring both repositories are fully synchronized. repository Path to the peer repository to synchronize with. 
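The three directions can be pictured as set unions over the snapshot IDs held by each repository; a minimal Python sketch of the documented behaviour (not Plakar's implementation):

```python
def sync(local: set, peer: set, direction: str) -> tuple[set, set]:
    """Model plakar sync directions as set unions of snapshot IDs."""
    if direction == "to":      # local -> peer
        peer = peer | local
    elif direction == "from":  # peer -> local
        local = local | peer
    elif direction == "with":  # both directions: fully synchronized
        local = peer = local | peer
    else:
        raise ValueError(f"unknown direction: {direction}")
    return local, peer

# Invented snapshot IDs for illustration.
local, peer = {"abcd", "ef01"}, {"2345"}
print(sync(local, peer, "to"))    # peer gains local's snapshots
print(sync(local, peer, "with"))  # both sides end up identical
```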
EXIT STATUS The plakar-sync utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Synchronize the snapshot \u0026#x2018;abcd\u0026#x2019; with a peer repository:\n$ plakar sync abcd to @peer Bi-directional synchronization of recent snapshots with a peer repository:\n$ plakar sync -since 7d with @peer Synchronize all snapshots of @peer to @repo:\n$ plakar at @repo sync from @peer SEE ALSO plakar(1), plakar-query(7)\nPlakar May 5, 2026 PLAKAR-SYNC(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-sync/","section":"Docs","summary":"Synchronize snapshots between Plakar repositories","title":"sync","type":"docs"},{"content":"PLAKAR-TOKEN-CREATE(1) General Commands Manual PLAKAR-TOKEN-CREATE(1) NAME plakar-token-create \u0026#x2014; Create a token to authenticate to Plakar services\nSYNOPSIS plakar token create DESCRIPTION The plakar token create command generates a token that can be used to authenticate with plakar-login(1).\nEXAMPLES Generate a token:\n$ plakar token create and then use it on a different machine to log in automatically:\n$ export PLAKAR_TOKEN=... $ plakar login -env SEE ALSO plakar(1), plakar-login(1), plakar-service(1)\nPlakar May 5, 2026 PLAKAR-TOKEN-CREATE(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-token-create/","section":"Docs","summary":"Create a token to authenticate to Plakar services","title":"token-create","type":"docs"},{"content":"PLAKAR-UI(1) General Commands Manual PLAKAR-UI(1) NAME plakar-ui \u0026#x2014; Serve the Plakar web user interface\nSYNOPSIS plakar ui [-addr address] [-cors] [-no-auth] [-no-spawn] [-cert path] [-key path] DESCRIPTION The plakar ui command serves the Plakar web user interface. By default, it opens the default web browser.\nThe options are as follows:\n-addr address Specify the address and port for the UI to listen on, separated by a colon (e.g. localhost:8080). 
If omitted, plakar ui listens on localhost on a random port. -cors Set the \u0026#x2018;Access-Control-Allow-Origin\u0026#x2019; HTTP headers to allow the UI to be accessed from any origin. -no-auth Disable the authentication token that otherwise is needed to consume the exposed HTTP APIs. -no-spawn Do not automatically open the web browser. -cert path Path to a full certificate file in PEM format. If both -cert and -key are provided, the server will expect https connections. If one or both are missing, the server will fall back to http. -key path Path to a certificate private key file in PEM format. EXIT STATUS The plakar-ui utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Use a custom address and disable automatic browser execution:\n$ plakar ui -addr localhost:9090 -no-spawn Create an https server with a custom certificate: $ plakar ui -cert fullchain.pem -key privkey.pem SEE ALSO plakar(1)\nPlakar May 5, 2026 PLAKAR-UI(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-ui/","section":"Docs","summary":"Serve the Plakar web user interface","title":"ui","type":"docs"},{"content":"PLAKAR-VERSION(1) General Commands Manual PLAKAR-VERSION(1) NAME plakar-version \u0026#x2014; Display the current Plakar version\nSYNOPSIS plakar version DESCRIPTION The plakar version command displays the current version of Plakar.\nSEE ALSO plakar(1)\nPlakar July 3, 2025 PLAKAR-VERSION(1) ","date":"6 May 2026","externalUrl":null,"permalink":"/docs/main/references/commands/plakar-version/","section":"Docs","summary":"Display the current Plakar version","title":"version","type":"docs"},{"content":"","date":"3 April 2026","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"TL;DR:\nWe built a PostgreSQL integration for Plakar that covers both logical backups (pg_dump / pg_dumpall) and physical backups (pg_basebackup), making 
database backups as straightforward as any other Plakar backup: no scripts, no glue code.\nIf there is one feature request that comes up more than any other in the Plakar community, it is database backups.\nAnd that makes sense. Databases hold the data that matters most. They are the hardest thing to lose and often the hardest thing to restore correctly.\nFor a while, a workaround existed. Plakar has a stdin integration that can ingest anything piped into it, so it was already possible to back up a PostgreSQL database by doing something like:\n$ pg_dump mydb | plakar backup stdin://dump.sql It works.\nBut it is manual, error-prone, and requires writing and maintaining shell scripts or cron jobs. That is exactly what Plakar is supposed to save you from.\nThe goal of this integration is simple: backing up a PostgreSQL database should be as easy as backing up anything else with Plakar.\nBackup strategies for PostgreSQL # A PostgreSQL server hosts a cluster made up of multiple databases, each containing schemas, tables, views, sequences, extensions, roles, and tablespaces. Backing up a cluster properly means capturing all of that, not just the raw table data.\nPostgreSQL has two fundamentally different backup approaches, and choosing between them involves real tradeoffs.\nLogical backups # Logical backups use pg_dump (for a single database) or pg_dumpall (for an entire cluster) to produce SQL or custom-format dumps. They work at the SQL level: they reconstruct the structure and data of your databases as a series of statements.\nPros:\nPortable across versions. You can restore a logical backup onto a different PostgreSQL major version. Selective restore. With pg_dump\u0026rsquo;s custom format, you can restore individual tables or schemas without restoring the entire database. Works with managed services. Amazon RDS, Google Cloud SQL, Supabase, and similar managed platforms do not expose the underlying data directory, so logical backups are often the only option. 
No server downtime required. pg_dump runs against a live server and uses PostgreSQL\u0026rsquo;s MVCC to produce a consistent snapshot. Cons:\nRestore requires a running PostgreSQL server. You cannot simply copy the output and start it. You need a working server instance to load the dump into. Slow on large databases. Dumping and restoring large datasets involves a lot of SQL processing. A physical backup of the same data will typically be faster to both create and restore. Physical backups # Physical backups use pg_basebackup to copy the raw data directory from the server\u0026rsquo;s replication interface. They operate at the file system level, streaming the actual pages PostgreSQL writes to disk.\nPros:\nFast and complete. A physical backup captures everything in one shot: all databases, configuration files, WAL segments. There is no SQL processing overhead. Directly startable. The restored directory can be handed to a PostgreSQL binary and started immediately, with no import step. Consistent under load. pg_basebackup uses the replication protocol, which guarantees a crash-consistent snapshot even on a busy server. Cons:\nVersion-locked. A physical backup must be restored with the same major PostgreSQL version. No selective restore. You restore the entire cluster. There is no straightforward way to extract a single table or schema from a physical backup without starting the server and doing a logical export afterward. Requires replication privileges. The backup user must have the REPLICATION privilege (or be a superuser), and the server must be configured with wal_level = replica or higher. Not available on most managed services. Providers do not expose the replication interface for direct pg_basebackup use. Which one to use? 
# Use logical backups when portability matters: cross-version migrations, managed cloud databases, or when you need to restore individual objects rather than a whole cluster.\nUse physical backups when speed and recoverability matter: large self-hosted clusters, disaster recovery setups, or environments where restore time is critical.\nBoth strategies are available through this integration, using two different URI schemes.\nInstalling the PostgreSQL integration # The integration is only available for plakar v1.1.0-beta.7 and above.\nFirst, install plakar:\n$ go install github.com/PlakarKorp/plakar@v1.1.0-beta.7 Then install the integration:\n$ plakar pkg add postgresql Or build it yourself from the source repository:\n$ plakar pkg build postgresql postgresql_v1.1.0-beta.2_darwin_arm64.ptar $ plakar pkg add ./postgresql_v1.1.0-beta.2_darwin_arm64.ptar The machine running Plakar also needs the standard PostgreSQL client tools in $PATH. On Debian/Ubuntu: apt install postgresql-client. On macOS via Homebrew: brew install postgresql.\nLogical backups (postgres://) # The postgres:// URI scheme triggers logical backups using pg_dump or pg_dumpall.\nBacking up a single database # Point the URI at a specific database and Plakar does the rest:\n$ plakar source add mypg \\ postgres://postgres:secret@db.example.com/myapp $ plakar at /var/backups backup @mypg Three records are stored in the snapshot:\n/manifest.json: cluster metadata captured at backup time (more on this below) /globals.sql: roles and tablespaces from the whole cluster (pg_dumpall --globals-only) /myapp.dump: the database itself in pg_dump custom format Backing up all databases # Omit the database name from the URI to back up everything:\n$ plakar source add mypg postgres://postgres:secret@db.example.com/ $ plakar backup @mypg This runs pg_dumpall and stores two records: /manifest.json and /all.sql, which contains all databases, roles, and tablespaces.\nPhysical backups (postgres+bin://) # The postgres+bin:// 
URI scheme triggers a physical backup using pg_basebackup.\nThe server must have wal_level = replica or higher in postgresql.conf, and the backup user must have the REPLICATION privilege (or be a superuser).\n$ plakar source add mypg postgres+bin://replicator:secret@db.example.com $ plakar backup @mypg The entire data directory is streamed file by file into the snapshot, preserving paths, permissions, and timestamps. A /manifest.json record is also written alongside the backup data (more on this below).\nRestore # Logical backups # Logical restores go through the postgres:// exporter:\n# Restore a single database (created automatically if it doesn\u0026#39;t exist) $ plakar destination add mypgdst postgres://postgres:secret@db.example.com/myapp \\ create_db=true $ plakar restore -to @mypgdst \u0026lt;snapid\u0026gt; # Restore all databases to a fresh server $ plakar destination add mypgdst postgres://postgres:secret@db.example.com/ $ plakar restore -to @mypgdst \u0026lt;snapid\u0026gt; # Restore, skipping ownership changes (when roles differ on the target) $ plakar destination add mypgdst postgres://postgres:secret@db.example.com/myapp \\ no_owner=true $ plakar restore -to @mypgdst \u0026lt;snapid\u0026gt; Physical backups # There is no dedicated PostgreSQL exporter for physical restores. The snapshot contains plain files, so any file-restore connector works. The simplest option is restoring directly to a local directory:\n$ plakar restore -to ./restored \u0026lt;snapid\u0026gt; $ docker run --rm -v \u0026#34;$PWD/restored/data:/var/lib/postgresql/data\u0026#34; postgres:17 The data directory must not be in use by a running PostgreSQL instance before restoration begins.\nThe manifest # Backing up the data is one thing. Knowing what is in the backup, without having to restore it first, is another.\nEvery snapshot written by this integration includes a /manifest.json record written before the backup data begins. 
Its purpose is to capture the full state of the cluster as structured metadata at the time the backup was taken.\nThis covers server version, host, cluster system identifier, whether the backup was taken from a hot standby, server configuration parameters, all roles and their memberships, tablespaces, and for every database: its schemas, extensions, tables, views, sequences, columns, indexes, and constraints.\n{ \u0026#34;version\u0026#34;: 2, \u0026#34;server_version\u0026#34;: \u0026#34;PostgreSQL 17.2\u0026#34;, \u0026#34;host\u0026#34;: \u0026#34;db.example.com\u0026#34;, \u0026#34;cluster_system_identifier\u0026#34;: \u0026#34;7489123456789012345\u0026#34;, \u0026#34;in_recovery\u0026#34;: false, \u0026#34;cluster_config\u0026#34;: { \u0026#34;wal_level\u0026#34;: \u0026#34;replica\u0026#34;, \u0026#34;max_connections\u0026#34;: 100, \u0026#34;data_checksums\u0026#34;: true, \u0026#34;archive_mode\u0026#34;: \u0026#34;on\u0026#34;, \u0026#34;archive_command_set\u0026#34;: true }, \u0026#34;roles\u0026#34;: [...], \u0026#34;tablespaces\u0026#34;: [...], \u0026#34;databases\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;myapp\u0026#34;, \u0026#34;schemas\u0026#34;: [...], \u0026#34;relations\u0026#34;: [ { \u0026#34;schema\u0026#34;: \u0026#34;public\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;users\u0026#34;, \u0026#34;kind\u0026#34;: \u0026#34;r\u0026#34;, \u0026#34;row_estimate\u0026#34;: 42381, \u0026#34;live_row_estimate\u0026#34;: 41903, \u0026#34;has_primary_key\u0026#34;: true, \u0026#34;columns\u0026#34;: [\u0026#34;id\u0026#34;, \u0026#34;email\u0026#34;, \u0026#34;created_at\u0026#34;], \u0026#34;indexes\u0026#34;: [\u0026#34;users_pkey\u0026#34;, \u0026#34;users_email_idx\u0026#34;] } ] } ] } Why this matters # Right now, the manifest already lets you inspect the structure of a backup without restoring it, or track how a schema evolved between two snapshots.\nBut this is just the beginning.\nThe plan is for the Plakar UI to consume this 
manifest and give you a rich view of what is inside each snapshot: browse databases, drill into schemas and tables, see column types, get a picture of the data that was captured, all without touching the actual dump or spinning up a PostgreSQL server.\nDocumentation and options # Both connectors already support many options for backup and restore. They are all documented in the PlakarKorp/integration-postgresql repository.\nWhat comes next # Plakar is moving fast and keeps delivering new features.\nOne of the things coming down the road is point-in-time recovery support. Once that lands, this integration will be able to take advantage of it: physical backups paired with WAL archiving will allow restoring a cluster to any transaction, not just the moment the backup was taken. This is the kind of capability that used to require dedicated tooling and a fair amount of operational knowledge to set up. With Plakar, the goal is to make it as simple as everything else.\nCall for testers # This integration is new and I would love to get feedback from people running it against real databases.\nIf you want to give it a try on your existing PostgreSQL setup and share what you find, come join us on Discord. 
Whether it works perfectly or something breaks, I want to hear about it!\n","date":"3 April 2026","externalUrl":null,"permalink":"/posts/2026-04-03/backing-up-postgresql-with-plakar/","section":"Plakar Blog","summary":"We built a PostgreSQL integration for Plakar that covers both logical backups (pg_dump / pg_dumpall) and physical backups (pg_basebackup), making database backups as straightforward as any other Plakar backup: no scripts, no glue code.","title":"Backing up PostgreSQL with Plakar","type":"posts"},{"content":"","date":"3 April 2026","externalUrl":null,"permalink":"/tags/database/","section":"Tags","summary":"","title":"Database","type":"tags"},{"content":"","date":"3 April 2026","externalUrl":null,"permalink":"/tags/integration/","section":"Tags","summary":"","title":"Integration","type":"tags"},{"content":"","date":"3 April 2026","externalUrl":null,"permalink":"/authors/jcastets/","section":"Authors","summary":"","title":"Jcastets","type":"authors"},{"content":"","date":"3 April 2026","externalUrl":null,"permalink":"/tags/postgresql/","section":"Tags","summary":"","title":"Postgresql","type":"tags"},{"content":"","date":"3 April 2026","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"3 April 2026","externalUrl":null,"permalink":"/download/","section":"Download Plakar","summary":"Select a Plakar version to view its download links and integrity checks. 
This page redirects to the latest release.","title":"Download Plakar","type":"download"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/aws-rds/","section":"Tags","summary":"","title":"AWS RDS","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/cloud-native/","section":"Tags","summary":"","title":"Cloud Native","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/cncf/","section":"Tags","summary":"","title":"Cncf","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/databases/","section":"Tags","summary":"","title":"Databases","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/categories/destination-connector/","section":"Categories","summary":"","title":"Destination Connector","type":"categories"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/disaster-recovery/","section":"Tags","summary":"","title":"Disaster Recovery","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/distributed-systems/","section":"Tags","summary":"","title":"Distributed Systems","type":"tags"},{"content":" Why protecting etcd matters # etcd stores the entire state of a distributed system; in a Kubernetes cluster, for example, that means every workload, configuration, secret, and policy. Its built-in replication handles partial failures well, but it has limits:\nQuorum loss: If too many nodes fail at once, etcd cannot recover without external intervention. Without a snapshot, the cluster state is gone. Logical corruption: A bad write, a botched upgrade, or a misconfigured operator can corrupt cluster state in ways that replication spreads rather than prevents. 
No point-in-time recovery: etcd does not natively provide a way to roll back to an earlier known-good state. Without snapshots, there is no recovery point to return to. For any system that relies on etcd, an independent backup is the last line of defense.\nWhat happens when etcd is lost? # In a Kubernetes cluster, losing etcd without a backup means losing everything the API server knows about: deployments, services, secrets, namespaces, RBAC policies, and custom resources. The underlying workloads may still be running, but the cluster cannot manage, schedule, or recover them.\nRebuilding from scratch takes time and risks missing configuration that was never captured in source control. A Plakar snapshot lets you restore the cluster to a known state instead.\nHow Plakar protects etcd # Plakar connects to one or more etcd nodes, takes a consistent snapshot of the cluster, and stores it in a Kloset — encrypted, deduplicated, and independent of the cluster itself.\nSnapshots can be stored on any supported backend: local storage, S3-compatible object storage, SFTP, or cold storage. Because Plakar snapshots are immutable, they remain intact even if the cluster or its storage is compromised.\nRestoring etcd from a Plakar snapshot uses the standard etcdutl recovery workflow. 
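As a concrete sketch of that workflow, the snippet below composes the native recovery command for a snapshot file that Plakar has already restored to disk. The snapshot path and target data directory here are illustrative assumptions, not values produced by Plakar, so the command is printed rather than executed:

```shell
# Illustrative sketch: the snapshot file location and target data directory
# are assumptions for this example, not Plakar defaults.
SNAPSHOT=/var/backups/etcd/snapshot.db   # file written by `plakar restore`
DATA_DIR=/var/lib/etcd-restored          # must not already exist

# Compose the etcdutl invocation that rebuilds a member's data directory;
# printed rather than run, since the target cluster is environment-specific.
CMD="etcdutl snapshot restore $SNAPSHOT --data-dir $DATA_DIR"
echo "$CMD"
```

On a real node you would run the composed command directly, then point the restored etcd member at the rebuilt data directory.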
Plakar retrieves the snapshot from the Kloset store and writes it to disk, from where the native etcd tooling takes over.\n","date":"2 April 2026","externalUrl":null,"permalink":"/integrations/etcd/","section":"Plakar Integrations","summary":"","title":"etcd","type":"integrations"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/etcd/","section":"Tags","summary":"","title":"Etcd","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/key-value-store/","section":"Tags","summary":"","title":"Key-Value Store","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/kubernetes/","section":"Tags","summary":"","title":"Kubernetes","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/managed-databases/","section":"Tags","summary":"","title":"Managed Databases","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/mariadb/","section":"Tags","summary":"","title":"MariaDB","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/mysql/","section":"Tags","summary":"","title":"MySQL","type":"tags"},{"content":" Why protecting MySQL and MariaDB data matters # MySQL and MariaDB handle storage-level durability well, but they offer no protection against the most common causes of real data loss:\nAccidental deletion: A DROP TABLE, a bad migration, or a DELETE without a WHERE clause wipes data instantly. Replication ensures every replica reflects the same mistake just as quickly. Corruption: A failed upgrade, a misbehaving plugin, or a storage fault can corrupt a database in ways that are not immediately visible, and the corruption itself can be replicated across replicas. No rollback: Without snapshots, there is no way to return to an earlier known-good state. By the time a problem is noticed, the damage may already be replicated everywhere. 
For production databases, a backup stored independently of the database server is not optional.\nWhat happens when a database is compromised? # MySQL and MariaDB access is controlled by user accounts and connection credentials. If those credentials are leaked or permissions are misconfigured:\nTotal loss: An attacker with sufficient privileges can drop databases or truncate tables through standard SQL. Automated scripts can do this in seconds across every database on the server. Ransomware: Malicious actors can exfiltrate data and then delete or encrypt the originals, leaving no clean copy to recover from. No recovery path: Without an independent backup stored outside the database server, there is nothing to restore from. Plakar mitigates these risks by storing snapshots in an isolated Kloset, encrypted end-to-end and independent of the database server itself.\nHow Plakar protects your databases # Plakar integrates with MySQL and MariaDB through their native dump tools (mysqldump and mariadb-dump).\nBoth single-database and full-server backups are supported. A single-database backup captures the schema, data, routines, triggers, and events for one database. A full-server backup captures everything across all databases in a single snapshot.\nBackups can be stored on any supported backend: local storage, S3-compatible object storage, SFTP, or cold storage. 
Because Plakar snapshots are immutable and end-to-end encrypted, they remain intact even if the database server or its credentials are compromised.\n","date":"2 April 2026","externalUrl":null,"permalink":"/integrations/mysql/","section":"Plakar Integrations","summary":"","title":"MySQL / MariaDB","type":"integrations"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/mysqldump/","section":"Tags","summary":"","title":"Mysqldump","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/on-premise/","section":"Tags","summary":"","title":"On-Premise","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/pg_basebackup/","section":"Tags","summary":"","title":"Pg_basebackup","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/pg_dump/","section":"Tags","summary":"","title":"Pg_dump","type":"tags"},{"content":" Why protecting PostgreSQL data matters # PostgreSQL handles data integrity well at the storage level, but it has no built-in protection against the most common causes of data loss:\nAccidental deletion: A dropped table, a bad migration, or a DELETE without a WHERE clause can wipe critical data instantly. Replication ensures every replica reflects the same mistake. Corruption: A failed upgrade, a misbehaving extension, or a storage fault can corrupt a database in ways that are not immediately visible, and the corruption itself can be replicated across replicas. No rollback: Without snapshots, there is no way to return to an earlier known-good state. Point-in-time recovery requires WAL archiving to be set up in advance and maintained carefully. For production databases, a backup that lives outside the database itself is not optional.\nWhat happens when a database is compromised? # PostgreSQL access is controlled by roles and connection credentials. 
If those credentials are leaked or permissions are misconfigured:\nTotal loss: An attacker with sufficient privileges can drop databases or truncate tables through standard SQL. Automated scripts can do this in seconds. Ransomware: Malicious actors can exfiltrate data and then delete or encrypt the originals, leaving no clean copy to recover from. No recovery path: Without an independent backup stored outside the database server, there is nothing to restore from. Plakar mitigates these risks by storing snapshots in an isolated Kloset, encrypted end-to-end and independent of the PostgreSQL server itself.\nHow Plakar protects your PostgreSQL databases # Plakar integrates with PostgreSQL through two independent strategies, each suited to different recovery needs:\nLogical backups use pg_dump and pg_dumpall to produce portable, SQL-level snapshots of individual databases or entire clusters. Logical backups work across PostgreSQL major versions, require no downtime, and support selective restore of individual databases, schemas, or tables. Physical backups use pg_basebackup to capture the entire PostgreSQL data directory as a file-level snapshot. Physical backups are faster to restore and capture everything (all databases, configuration, and WAL) but must be restored with the same PostgreSQL major version. Both strategies benefit from Plakar\u0026rsquo;s encryption, deduplication, and snapshot management. 
Backups can be stored on any supported backend: local storage, S3-compatible object storage, SFTP, or cold storage.\n","date":"2 April 2026","externalUrl":null,"permalink":"/integrations/postgres/","section":"Plakar Integrations","summary":"","title":"PostgreSQL","type":"integrations"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/categories/source-connector/","section":"Categories","summary":"","title":"Source Connector","type":"categories"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/sql/","section":"Tags","summary":"","title":"SQL","type":"tags"},{"content":" PLAKAR-AGENT(1) General Commands Manual PLAKAR-AGENT(1) NAME plakar-agent \u0026#x2014; Run the Plakar agent\nSYNOPSIS plakar agent [start [-foreground] [-log logfile] [-teardown delay]] plakar agent stop DESCRIPTION The plakar agent start command, which is the default, starts the Plakar agent which will execute subsequent plakar(1) commands on their behalf for faster processing.\nplakar agent is executed automatically by most plakar(1) commands and terminates by itself when idle for too long, so usually there's no need to manually start it.\nThe options for plakar agent start are as follows:\n-foreground Do not daemonize, run in the foreground and log to standard error. -log logfile Write log output to the given logfile which is created if it does not exist. The default is to log to syslog. -teardown delay Specify the delay after which the idle agent terminates. The delay parameter must be given as a sequence of decimal values, each followed by a time unit (e.g. \u0026#x201C;1m30s\u0026#x201D;). Defaults to 5 seconds. plakar agent stop forces the currently running agent to stop. This is useful when upgrading from an older plakar(1) version where the agent was always running.\nDIAGNOSTICS The plakar-agent utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. 
\u0026gt;0 An error occurred, such as invalid parameters, inability to create the repository, or configuration issues. SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-agent/","section":"Docs","summary":"Run the Plakar agent","title":"agent","type":"docs"},{"content":" PLAKAR-ARCHIVE(1) General Commands Manual PLAKAR-ARCHIVE(1) NAME plakar-archive \u0026#x2014; Create an archive from a Plakar snapshot\nSYNOPSIS plakar archive [-format type] [-output archive] [-rebase] snapshotID:path DESCRIPTION The plakar archive command creates an archive of the given type from the contents at path of a specified Plakar snapshot, or all the files if no path is given.\nThe options are as follows:\n-format type Specify the archive format. Supported formats are: tar Creates a tar file. tarball Creates a compressed tar.gz file. zip Creates a zip archive. -output pathname Specify the output path for the archive file. If omitted, the archive is created with a default name based on the current date and time. -rebase Strip the leading path from archived files, useful for creating \u0026quot;flat\u0026quot; archives without nested directories. EXAMPLES Create a tarball of the entire snapshot:\n$ plakar archive -output backup.tar.gz -format tarball abc123 Create a zip archive of a specific directory within a snapshot:\n$ plakar archive -output dir.zip -format zip abc123:/var/www Archive with rebasing to remove directory structure:\n$ plakar archive -rebase -format tar abc123 DIAGNOSTICS The plakar-archive utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as unsupported format, missing files, or permission issues. 
SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-archive/","section":"Docs","summary":"Create an archive from a Plakar snapshot","title":"archive","type":"docs"},{"content":" PLAKAR-BACKUP(1) General Commands Manual PLAKAR-BACKUP(1) NAME plakar-backup \u0026#x2014; Create a new snapshot in a Kloset store\nSYNOPSIS plakar backup [-concurrency number] [-force-timestamp timestamp] [-ignore pattern] [-ignore-file file] [-check] [-no-xattr] [-o option=value] [-packfiles path] [-quiet] [-silent] [-tag tag] [-scan] [place] DESCRIPTION The plakar backup command creates a new snapshot of place, or the current directory. Snapshots can be filtered to ignore specific files or directories based on patterns provided through options.\nplace can be either a path, a URI, or a label with the form \u0026#x201C;@name\u0026#x201D; to reference a source connector configured with plakar-source(1).\nThe options are as follows:\n-concurrency number Set the maximum number of parallel tasks for faster processing. Defaults to 8 * CPU count + 1. -force-timestamp timestamp Specify a fixed timestamp (in ISO 8601 or relative human format) to use for the snapshot. Can be used to reimport an existing backup with the same timestamp. -ignore pattern Specify individual gitignore exclusion patterns to ignore files or directories in the backup. This option can be repeated. -ignore-file file Specify a file containing gitignore exclusion patterns, one per line, to ignore files or directories in the backup. -check Perform a full check on the backup after success. -no-xattr Skip extended attributes (xattrs) when creating the backup. -o option=value Can be used to pass extra arguments to the source connector. The given option takes precedence over the configuration file. -quiet Suppress output to standard output, only logging errors and warnings. 
-packfiles path Path where the temporary packfiles are written instead of building them in memory. If the special value \u0026#x2018;memory\u0026#x2019; is specified then the packfiles are built in memory (the default). -silent Suppress all output. -tag tag Comma-separated list of tags to apply to the snapshot. -scan Do not write a snapshot; instead, perform a dry run by outputting the list of files and directories that would be included in the backup. Respects all exclude patterns and other options, but makes no changes to the Kloset store. EXAMPLES Create a snapshot of the current directory with two tags:\n$ plakar backup -tag daily-backup,production Ignore files using patterns in a given file:\n$ plakar backup -ignore-file ~/my-ignore-file /var/www or by using patterns specified inline:\n$ plakar backup -ignore \u0026quot;*.tmp\u0026quot; -ignore \u0026quot;*.log\u0026quot; /var/www Pass an option to the importer, in this case to avoid traversing mount points:\n$ plakar backup -o dont_traverse_fs=true / DIAGNOSTICS The plakar-backup utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully; a snapshot was created, but some items may have been skipped (for example, files without sufficient permissions). Run plakar-info(1) on the new snapshot to view any errors. \u0026gt;0 An error occurred, such as failure to access the Kloset store or issues with exclusion patterns. SEE ALSO plakar(1), plakar-source(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-backup/","section":"Docs","summary":"Create a new snapshot in a Kloset store","title":"backup","type":"docs"},{"content":" PLAKAR-CAT(1) General Commands Manual PLAKAR-CAT(1) NAME plakar-cat \u0026#x2014; Display file contents from a Plakar snapshot\nSYNOPSIS plakar cat [-decompress] [-highlight] snapshotID:path ... 
DESCRIPTION The plakar cat command outputs the contents of path within Plakar snapshots to the standard output. It can decompress compressed files and optionally apply syntax highlighting based on the file type.\nThe options are as follows:\n-decompress If set, Plakar attempts to decompress application/gzip files. -highlight Apply syntax highlighting to the output based on the file type. EXAMPLES Display a file's contents from a snapshot:\n$ plakar cat abc123:/etc/passwd Display a file with syntax highlighting:\n$ plakar cat -highlight abc123:/home/op/korpus/driver.sh DIAGNOSTICS The plakar-cat utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as failure to retrieve a file or decompress content. SEE ALSO plakar(1), plakar-backup(1)\nAugust 6, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-cat/","section":"Docs","summary":"Display file contents from a Plakar snapshot","title":"cat","type":"docs"},{"content":" PLAKAR-CHECK(1) General Commands Manual PLAKAR-CHECK(1) NAME plakar-check \u0026#x2014; Check data integrity in a Plakar repository\nSYNOPSIS plakar check [-concurrency number] [-fast] [-no-verify] [-quiet] [snapshotID:path ...] DESCRIPTION The plakar check command verifies the integrity of data in a Plakar repository. It checks the given paths inside the snapshots for consistency and validates file macs to ensure no corruption has occurred, or checks all the data in the repository if no snapshotID or location flags are given.\nIn addition to the flags described below, plakar check supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-concurrency number Set the maximum number of parallel tasks for faster processing. Defaults to 8 * CPU count + 1. -fast Enable a faster check that skips mac verification. 
This option performs only structural validation without confirming data integrity. -no-verify Disable signature verification. This option allows you to proceed with checking snapshot integrity even if the snapshot signature is invalid. -quiet Suppress output to standard output, only logging errors and warnings. EXAMPLES Perform a full integrity check on all snapshots:\n$ plakar check Perform a fast check on specific paths of two snapshots:\n$ plakar check -fast abc123:/etc/passwd def456:/var/www DIAGNOSTICS The plakar-check utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully with no integrity issues found. \u0026gt;0 An error occurred, such as corruption detected in a snapshot or failure to check data integrity. SEE ALSO plakar(1), plakar-query(7)\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-check/","section":"Docs","summary":"Check data integrity in a Plakar repository","title":"check","type":"docs"},{"content":" PLAKAR-CLONE(1) General Commands Manual PLAKAR-CLONE(1) NAME plakar-clone \u0026#x2014; Clone a Plakar repository to a new location\nSYNOPSIS plakar clone to path DESCRIPTION The plakar clone command creates a full clone of an existing Plakar repository, including all snapshots, packfiles, and repository states, and saves it at the specified path.\nEXAMPLES Clone a repository to a new location:\nplakar clone to /path/to/new/repository DIAGNOSTICS The plakar-clone utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as failure to access the source repository or to create the target repository. 
SEE ALSO plakar(1), plakar-create(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-clone/","section":"Docs","summary":"Clone a Plakar repository to a new location","title":"clone","type":"docs"},{"content":" PLAKAR-CREATE(1) General Commands Manual PLAKAR-CREATE(1) NAME plakar-create \u0026#x2014; Create a new Plakar repository\nSYNOPSIS plakar create [-plaintext] DESCRIPTION The plakar create command creates a new Plakar repository at the specified path which defaults to ~/.plakar.\nThe options are as follows:\n-plaintext Disable transparent encryption for the repository. If specified, the repository will not use encryption. ENVIRONMENT PLAKAR_PASSPHRASE Repository encryption password. DIAGNOSTICS The plakar-create utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid parameters, inability to create the repository, or configuration issues. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-create/","section":"Docs","summary":"Create a new Plakar repository","title":"create","type":"docs"},{"content":" PLAKAR-DESTINATION(1) General Commands Manual PLAKAR-DESTINATION(1) NAME plakar-destination \u0026#x2014; Manage Plakar restore destination configuration\nSYNOPSIS plakar destination subcommand ... DESCRIPTION The plakar destination command manages the configuration of destinations where Plakar will restore.\nThe configuration consists of a set of named entries, each of them describing a destination where a restore operation may happen.\nA destination is defined by at least a location, specifying the exporter to use, and some exporter-specific parameters.\nThe subcommands are as follows:\nadd name location [option=value ...] 
Create a new destination entry identified by name with the specified location describing the exporter to use. Additional exporter options can be set by adding option=value parameters. check name Check whether the exporter for the destination identified by name is properly configured. import [-config location] [-overwrite] [-rclone] [sections ...] Import destination configurations from various sources including files, piped input, or rclone configurations. By default, reads from stdin, allowing for piped input from other commands like plakar source show.\nThe -config option specifies a file or URL to read the configuration from.\nThe -overwrite option allows overwriting existing destination configurations with the same names.\nThe -rclone option treats the input as an rclone configuration, useful for importing rclone remotes as Plakar destinations.\nSpecific sections can be imported by listing their names.\nSections can be renamed during import by appending :newname.\nFor detailed examples and usage patterns, see the https://plakar.io/docs/v1.0.6/guides/importing-configurations/ Importing Configurations guide.\nping name Try to open the destination identified by name to make sure it is reachable. rm name Remove the destination identified by name from the configuration. set name [option=value ...] Set the option to value for the destination identified by name. Multiple option/value pairs can be specified. show [-secrets] [name ...] Display the current destinations configuration. If -secrets is specified, sensitive information such as passwords or tokens will be shown. unset name [option ...] Remove the option for the destination entry identified by name. 
EXIT STATUS The plakar-destination utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nSeptember 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-destination/","section":"Docs","summary":"Manage Plakar restore destination configuration","title":"destination","type":"docs"},{"content":" PLAKAR-DIAG(1) General Commands Manual PLAKAR-DIAG(1) NAME plakar-diag \u0026#x2014; Display detailed information about Plakar internal structures\nSYNOPSIS plakar diag [contenttype | errors | locks | object | packfile | snapshot | state | vfs | xattr] DESCRIPTION The plakar diag command provides detailed information about various internal data structures. The type of information displayed depends on the specified argument. Without any arguments, display information about the repository.\nThe sub-commands are as follows:\ncontenttype snapshotID:path \u0026#x00A0; errors snapshotID Display the list of errors in the given snapshot. locks Display the list of locks currently held on the repository. object objectID Display information about a specific object, including its mac, type, tags, and associated data chunks. packfile packfileID Show details of packfiles, including entries and macs, which store object data within the repository. snapshot snapshotID Show detailed information about a specific snapshot, including its metadata, directory and file count, and size. state List or describe the states in the repository. vfs snapshotID:path Show filesystem (VFS) details for a specific path within a snapshot, listing directory or file attributes, including permissions, ownership, and custom metadata. 
xattr snapshotID:path \u0026#x00A0; EXAMPLES Show repository information:\n$ plakar diag Show detailed information for a snapshot:\n$ plakar diag snapshot abc123 List all states in the repository:\n$ plakar diag state Display a specific object within a snapshot:\n$ plakar diag object 1234567890abcdef Display filesystem details for a path within a snapshot:\n$ plakar diag vfs abc123:/etc/passwd DIAGNOSTICS The plakar-diag utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as an invalid snapshot or object ID, or a failure to retrieve the requested data. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-diag/","section":"Docs","summary":"Display detailed information about Plakar internal structures","title":"diag","type":"docs"},{"content":" PLAKAR-DIFF(1) General Commands Manual PLAKAR-DIFF(1) NAME plakar-diff \u0026#x2014; Show differences between files in Plakar snapshots\nSYNOPSIS plakar diff [-highlight] [-recursive] snapshotID1[:path1] snapshotID2[:path2] DESCRIPTION The plakar diff command compares two Plakar snapshots, optionally restricting to specific files within them. If only snapshot IDs are provided, it compares the root directories of each snapshot. If file paths are specified, the command compares the individual files. The diff output is shown in unified diff format, with an option to highlight differences.\nThe options are as follows:\n-highlight Apply syntax highlighting to the diff output for readability. -recursive When comparing directories, recursively compare all subdirectories. 
EXAMPLES Compare root directories of two snapshots:\n$ plakar diff abc123 def456 Compare /etc/passwd across snapshots with highlighting:\n$ plakar diff -highlight abc123:/etc/passwd def456:/etc/passwd DIAGNOSTICS The plakar-diff utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid snapshot IDs, missing files, or an unsupported file type. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-diff/","section":"Docs","summary":"Show differences between files in Plakar snapshots","title":"diff","type":"docs"},{"content":" PLAKAR-DIGEST(1) General Commands Manual PLAKAR-DIGEST(1) NAME plakar-digest \u0026#x2014; Compute digests for files in a Plakar snapshot\nSYNOPSIS plakar digest [-hashing algorithm] snapshotID[:path] [...] DESCRIPTION The plakar digest command computes and displays digests for the specified path in the given snapshotID. Multiple snapshotID and path arguments may be given. By default, the command computes the digest by reading the file contents.\nThe options are as follows:\n-hashing algorithm Use algorithm to compute the digest. Defaults to SHA256. EXAMPLES Compute the digest of a file within a snapshot:\n$ plakar digest abc123:/etc/passwd Use BLAKE3 as the digest algorithm:\n$ plakar digest -hashing BLAKE3 abc123:/etc/netstart DIAGNOSTICS The plakar-digest utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as failure to retrieve a file digest or invalid snapshot ID. 
SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-digest/","section":"Docs","summary":"Compute digests for files in a Plakar snapshot","title":"digest","type":"docs"},{"content":" PLAKAR-DUP(1) General Commands Manual PLAKAR-DUP(1) NAME plakar-dup \u0026#x2014; Duplicates an existing snapshot with a different ID\nSYNOPSIS plakar dup DESCRIPTION The plakar dup command creates a duplicate of an existing snapshot with a new snapshot ID. The new snapshot is an exact copy of the original, including all files and metadata.\nEXAMPLES Create a duplicate of a snapshot with ID \u0026quot;abc123\u0026quot;:\n$ plakar dup abc123 DIAGNOSTICS The plakar-dup utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as failure to retrieve the existing snapshot or an invalid snapshot ID. SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-dup/","section":"Docs","summary":"Duplicates an existing snapshot with a different ID","title":"dup","type":"docs"},{"content":" PLAKAR-INFO(1) General Commands Manual PLAKAR-INFO(1) NAME plakar-info \u0026#x2014; Display detailed information about internal structures\nSYNOPSIS plakar info [-errors] [snapshot] DESCRIPTION The plakar info command provides detailed information about a Plakar repository and snapshots. The type of information displayed depends on the specified argument. Without any arguments, it displays information about the repository.\nThe options are as follows:\n-errors Show errors within the specified snapshot. 
EXAMPLES Show repository information:\n$ plakar info Show detailed information for a snapshot:\n$ plakar info abc123 Show errors within a snapshot:\n$ plakar info -errors abc123 DIAGNOSTICS The plakar-info utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as an invalid snapshot or object ID, or a failure to retrieve the requested data. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-info/","section":"Docs","summary":"Display detailed information about internal structures","title":"info","type":"docs"},{"content":" PLAKAR-LOCATE(1) General Commands Manual PLAKAR-LOCATE(1) NAME plakar-locate \u0026#x2014; Find filenames in a Plakar snapshot\nSYNOPSIS plakar locate [-snapshot snapshotID] patterns ... DESCRIPTION The plakar locate command searches snapshots to find file names matching any of the given patterns and prints the abbreviated snapshot ID and the full path of the matched files. Matching works according to the shell globbing rules.\nIf neither -snapshot nor any location flags are given, plakar locate will search in all snapshots.\nIn addition to the flags described below, plakar locate supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-snapshot snapshotID Limit the search to the given snapshot. EXAMPLES Search for files ending in \u0026#x201C;wd\u0026#x201D;:\n$ plakar locate '*wd' abc123:/etc/master.passwd abc123:/etc/passwd DIAGNOSTICS The plakar-locate utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid parameters, inability to create the repository, or configuration issues. 
SEE ALSO plakar(1), plakar-backup(1), plakar-query(7)\nCAVEATS The patterns may have to be quoted to avoid the shell attempting to expand them.\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-locate/","section":"Docs","summary":"Find filenames in a Plakar snapshot","title":"locate","type":"docs"},{"content":" PLAKAR-LOGIN(1) General Commands Manual PLAKAR-LOGIN(1) NAME plakar-login \u0026#x2014; Authenticate to Plakar services\nSYNOPSIS plakar login [-email email] [-github] [-no-spawn] [-status] DESCRIPTION The plakar login command initiates an authentication flow with the Plakar platform. Login is optional for most plakar commands but required to enable certain services, such as alerting. See also plakar-service(1).\nOnly one authentication method may be specified per invocation: the -email and -github options are mutually exclusive. If neither is provided, -github is assumed.\nThe options are as follows:\n-email email Send a login link to the specified email address. Clicking the link in the received email will authenticate plakar. -github Use GitHub OAuth to authenticate. A browser will be spawned to initiate the OAuth flow unless -no-spawn is specified. -no-spawn Do not automatically open a browser window for authentication flows. -status Check whether the user is currently logged in. This option cannot be used with any other options. 
EXAMPLES Start a login via email:\n$ plakar login -email user@example.com Authenticate via GitHub (default, opens browser):\n$ plakar login SEE ALSO plakar(1), plakar-logout(1), plakar-service(1)\nJuly 8, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-login/","section":"Docs","summary":"Authenticate to Plakar services","title":"login","type":"docs"},{"content":" PLAKAR-LOGOUT(1) General Commands Manual PLAKAR-LOGOUT(1) NAME plakar-logout \u0026#x2014; Log out from Plakar services\nSYNOPSIS plakar logout DESCRIPTION The plakar logout command logs out an authenticated session with the Plakar platform.\nEXAMPLES Log out from the current session:\n$ plakar logout SEE ALSO plakar(1), plakar-login(1), plakar-service(1)\nJuly 8, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-logout/","section":"Docs","summary":"Log out from Plakar services","title":"logout","type":"docs"},{"content":" PLAKAR-LS(1) General Commands Manual PLAKAR-LS(1) NAME plakar-ls \u0026#x2014; List snapshots and their contents in a Plakar repository\nSYNOPSIS plakar ls [-uuid] [-recursive] [snapshotID:path] DESCRIPTION The plakar ls command lists snapshots stored in a Plakar repository, and optionally displays the contents of path in a specified snapshot.\nIn addition to the flags described below, plakar ls supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-uuid Display the full UUID for each snapshot instead of the shorter snapshot ID. -recursive List directory contents recursively when exploring snapshot contents. 
EXAMPLES List all snapshots with their short IDs:\n$ plakar ls List all snapshots with UUIDs instead of short IDs:\n$ plakar ls -uuid List snapshots with a specific tag:\n$ plakar ls -tag daily-backup List contents of a specific snapshot:\n$ plakar ls abc123 Recursively list contents of a specific snapshot:\n$ plakar ls -recursive abc123:/etc DIAGNOSTICS The plakar-ls utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as failure to retrieve snapshot information or invalid snapshot ID. SEE ALSO plakar(1), plakar-query(7)\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-ls/","section":"Docs","summary":"List snapshots and their contents in a Plakar repository","title":"ls","type":"docs"},{"content":" PLAKAR-MAINTENANCE(1) General Commands Manual PLAKAR-MAINTENANCE(1) NAME plakar-maintenance \u0026#x2014; Remove unused data from a Plakar repository\nSYNOPSIS plakar maintenance DESCRIPTION The plakar maintenance command removes unused blobs, objects, and chunks from a Plakar repository to reduce storage space. It identifies unreferenced data and reorganizes packfiles to ensure only active snapshots and their dependencies are retained. The maintenance process updates snapshot indexes to reflect these changes.\nDIAGNOSTICS The plakar-maintenance utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred during maintenance, such as failure to update indexes or remove data. 
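The maintenance page above documents no EXAMPLES section; the following is a minimal usage sketch (the @mys3bucket store label is a hypothetical configuration, reusing the at syntax from plakar(1)):

```shell
# Reclaim space in the default Kloset store at ~/.plakar
$ plakar maintenance

# Run maintenance on a configured remote store (hypothetical @mys3bucket label)
$ plakar at @mys3bucket maintenance
```

Since maintenance removes data left unreferenced by deleted snapshots, it is typically run after plakar rm or plakar prune.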
SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-maintenance/","section":"Docs","summary":"Remove unused data from a Plakar repository","title":"maintenance","type":"docs"},{"content":" PLAKAR-MOUNT(1) General Commands Manual PLAKAR-MOUNT(1) NAME plakar-mount \u0026#x2014; Mount Plakar snapshots as a read-only filesystem\nSYNOPSIS plakar mount mountpoint DESCRIPTION The plakar mount command mounts a Plakar repository snapshot as a read-only filesystem at the specified mountpoint. This allows users to access snapshot contents as if they were part of the local file system, providing easy browsing and retrieval of files without needing to explicitly restore them. This command may not work on all operating systems.\nEXAMPLES Mount a snapshot to the specified directory:\n$ plakar mount ~/mnt DIAGNOSTICS The plakar-mount utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as an invalid mountpoint or failure during the mounting process. SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-mount/","section":"Docs","summary":"Mount Plakar snapshots as a read-only filesystem","title":"mount","type":"docs"},{"content":" PLAKAR-PKG-ADD(1) General Commands Manual PLAKAR-PKG-ADD(1) NAME plakar-pkg-add \u0026#x2014; Install Plakar plugins\nSYNOPSIS plakar pkg add plugin ... DESCRIPTION The plakar pkg add command adds a local or a remote plugin.\nIf plugin is an absolute path, or if it starts with \u0026#x2018;./\u0026#x2019;, then it is considered a path to a local plugin file, otherwise it is downloaded from the Plakar plugin server. In the latter case, the user must be logged in via the plakar-login(1) command.\nFILES ~/.cache/plakar/plugins/ Plugin cache directory. Respects XDG_CACHE_HOME if set. 
~/.local/share/plakar/plugins Plugin directory. Respects XDG_DATA_HOME if set. SEE ALSO plakar-login(1), plakar-pkg-build(1), plakar-pkg-create(1), plakar-pkg-rm(1), plakar-pkg-show(1)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-pkg-add/","section":"Docs","summary":"Install Plakar plugins","title":"pkg-add","type":"docs"},{"content":" PLAKAR-PKG-BUILD(1) General Commands Manual PLAKAR-PKG-BUILD(1) NAME plakar-pkg-build \u0026#x2014; Build Plakar plugins from source\nSYNOPSIS plakar pkg build recipe.yaml DESCRIPTION The plakar pkg build command fetches the sources and builds the plugin as specified in the given plakar-pkg-recipe.yaml(5). If it builds successfully, the resulting plugin will be created in the current working directory.\nFILES ~/.cache/plakar/plugins/ Plugin cache directory. Respects XDG_CACHE_HOME if set. ~/.local/share/plakar/plugins Plugin directory. Respects XDG_DATA_HOME if set. SEE ALSO plakar-pkg-add(1), plakar-pkg-create(1), plakar-pkg-rm(1), plakar-pkg-show(1), plakar-pkg-recipe.yaml(5)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-pkg-build/","section":"Docs","summary":"Build Plakar plugins from source","title":"pkg-build","type":"docs"},{"content":" PLAKAR-PKG-CREATE(1) General Commands Manual PLAKAR-PKG-CREATE(1) NAME plakar-pkg-create \u0026#x2014; Package a plugin\nSYNOPSIS plakar pkg create manifest.yaml DESCRIPTION The plakar pkg create command assembles a plugin using the provided plakar-pkg-manifest.yaml(5).\nAll the files needed for the plugin must already be available, i.e. 
executables must already be built.\nAll external files must reside in the same directory as the manifest.yaml or in subdirectories.\nSEE ALSO plakar-pkg-add(1), plakar-pkg-build(1), plakar-pkg-rm(1), plakar-pkg-show(1), plakar-pkg-manifest.yaml(5)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-pkg-create/","section":"Docs","summary":"Package a plugin","title":"pkg-create","type":"docs"},{"content":" PLAKAR-PKG-MANIFEST.YAML(5) File Formats Manual PLAKAR-PKG-MANIFEST.YAML(5) NAME manifest.yaml \u0026#x2014; Manifest for plugin assembly\nDESCRIPTION The manifest.yaml file format describes how to package a plugin. No build or compilation is done, so all executables and other files must be prepared beforehand.\nmanifest.yaml must have a top-level YAML object with the following fields:\nname The name of the plugin. display_name The displayed name in the UI. description A short description of the connectors. homepage A link to the homepage. license The license of the connectors. tags A YAML array of strings for tags that describe the connectors. api_version The API version supported. version The plugin version, which doubles as the git tag as well. It must follow semantic versioning and have a \u0026#x2018;v\u0026#x2019; prefix, e.g. \u0026#x2018;v1.2.3\u0026#x2019;. connectors A YAML array of objects with the following properties: type The connector type, one of importer, exporter, or store. protocols An array of YAML strings containing all the protocols that the connector supports. location_flags An optional array of YAML strings describing some properties of the connector. These properties are: localfs Whether paths given to this connector have to be made absolute. file Whether this store backend handles a Kloset in a single file, e.g. a ptar file. executable Path to the plugin executable. extra_file An optional array of YAML strings. 
These are extra files that need to be included in the package. EXAMPLES A sample manifest for the \u0026#x201C;fs\u0026#x201D; plugin is as follows:\n# manifest.yaml name: fs display_name: file system connector description: file storage but as external plugin homepage: https://github.com/PlakarKorp/integration-fs license: ISC tags: [ fs, filesystem, \u0026quot;local files\u0026quot; ] api_version: 1.0.0 version: v1.0.0 connectors: - type: importer executable: fs-importer protocols: [fs] - type: exporter executable: fs-exporter protocols: [fs] - type: storage executable: fs-store protocols: [fs] SEE ALSO plakar-pkg-create(1)\nJuly 20, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-pkg-manifest.yaml/","section":"Docs","summary":"Manifest for plugin assembly","title":"pkg-manifest.yaml","type":"docs"},{"content":" PLAKAR-PKG-RECIPE.YAML(5) File Formats Manual PLAKAR-PKG-RECIPE.YAML(5) NAME recipe.yaml \u0026#x2014; Recipe to build Plakar plugins from source\nDESCRIPTION The recipe.yaml file format describes how to fetch and build Plakar plugins. It must have a top-level YAML object with the following fields:\nname The name of the plugin. version The plugin version, which doubles as the git tag as well. It must follow semantic versioning and have a \u0026#x2018;v\u0026#x2019; prefix, e.g. \u0026#x2018;v1.2.3\u0026#x2019;. repository URL to the git repository holding the plugin. 
EXAMPLES A sample recipe to build the \u0026#x201C;fs\u0026#x201D; plugin is as follows:\n# recipe.yaml name: fs version: v1.0.0 repository: https://github.com/PlakarKorp/integrations-fs SEE ALSO plakar-pkg-build(1)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-pkg-recipe.yaml/","section":"Docs","summary":"Recipe to build Plakar plugins from source","title":"pkg-recipe.yaml","type":"docs"},{"content":" PLAKAR-PKG-RM(1) General Commands Manual PLAKAR-PKG-RM(1) NAME plakar-pkg-rm \u0026#x2014; Uninstall Plakar plugins\nSYNOPSIS plakar pkg rm plugin ... DESCRIPTION The plakar pkg rm command removes plugins that have been previously installed with the plakar-pkg-add(1) command.\nThe list of plugins can be obtained with plakar-pkg-show(1).\nEXAMPLES Removing a plugin:\n$ plakar pkg show epic-v1.2.3 $ plakar pkg rm epic-v1.2.3 SEE ALSO plakar-pkg-add(1), plakar-pkg-build(1), plakar-pkg-create(1), plakar-pkg-show(1)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-pkg-rm/","section":"Docs","summary":"Uninstall Plakar plugins","title":"pkg-rm","type":"docs"},{"content":" PLAKAR-PKG-SHOW(1) General Commands Manual PLAKAR-PKG-SHOW(1) NAME plakar-pkg-show \u0026#x2014; Show installed Plakar plugins\nSYNOPSIS plakar pkg show [-available] [-long] DESCRIPTION The plakar pkg show command shows the currently installed plugins.\nThe options are as follows:\n-available Instead of installed packages, show the set of prebuilt packages available for this system. -long Show the full package name. FILES ~/.cache/plakar/plugins/ Plugin cache directory. Respects XDG_CACHE_HOME if set. ~/.local/share/plakar/plugins Plugin directory. Respects XDG_DATA_HOME if set. 
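The pkg show page above lists no EXAMPLES; the documented flags combine as follows:

```shell
# List the currently installed plugins
$ plakar pkg show

# Same, but showing full package names
$ plakar pkg show -long

# List the prebuilt packages available for this system instead
$ plakar pkg show -available
```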
SEE ALSO plakar-pkg-add(1), plakar-pkg-build(1), plakar-pkg-create(1), plakar-pkg-rm(1)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-pkg-show/","section":"Docs","summary":"Show installed Plakar plugins","title":"pkg-show","type":"docs"},{"content":" PLAKAR(1) General Commands Manual PLAKAR(1) NAME plakar \u0026#x2014; effortless backups\nSYNOPSIS plakar [-config dir] [-cpu number] [-keyfile path] [-no-agent] [-quiet] [-trace subsystems] [at kloset] subcommand ... DESCRIPTION plakar is a tool to create distributed, versioned backups with compression, encryption, and data deduplication.\nBy default, plakar operates on the Kloset store at ~/.plakar. This can be changed either by using the at option or by setting the PLAKAR_REPOSITORY environment variable.\nThe following options are available:\n-config dir Specify an alternate configuration directory. Defaults to ~/.config/plakar. -cpu number Limit the number of parallel workers plakar uses to number. By default it's the number of online CPUs. -keyfile path Read the passphrase from the key file at path instead of prompting. Overrides the PLAKAR_PASSPHRASE environment variable. -no-agent Run without attempting to connect to the agent. -quiet Disable all output except for errors. -trace subsystems Display trace logs. subsystems is a comma-separated series of keywords to enable the trace logs for different subsystems: all, trace, repository, snapshot and server. at kloset Operates on the given kloset store. It could be a path, a URI, or a label in the form \u0026#x201C;@name\u0026#x201D; to reference a configuration created with plakar-store(1). The following commands are available:\nagent Run the plakar agent and configure scheduled tasks, documented in plakar-agent(1). archive Create an archive from a Kloset snapshot, documented in plakar-archive(1). backup Create a new Kloset snapshot, documented in plakar-backup(1). cat Display file contents from a Kloset snapshot, documented in plakar-cat(1). 
check Check data integrity in a Kloset store, documented in plakar-check(1). clone Clone a Kloset store to a new location, documented in plakar-clone(1). create Create a new Kloset store, documented in plakar-create(1). destination Manage configurations for the destination connectors, documented in plakar-destination(1). diff Show differences between files in a Kloset snapshot, documented in plakar-diff(1). digest Compute digests for files in a Kloset snapshot, documented in plakar-digest(1). help Show this manpage and the ones for the subcommands. info Display detailed information about internal structures, documented in plakar-info(1). locate Find filenames in a Kloset snapshot, documented in plakar-locate(1). ls List snapshots and their contents in a Kloset store, documented in plakar-ls(1). maintenance Remove unused data from a Kloset store, documented in plakar-maintenance(1). mount Mount Kloset snapshots as a read-only filesystem, documented in plakar-mount(1). ptar Create a .ptar archive, documented in plakar-ptar(1). pkg show List installed plugins, documented in plakar-pkg-show(1). pkg add Install a plugin, documented in plakar-pkg-add(1). pkg build Build a plugin from source, documented in plakar-pkg-build(1). pkg create Package a plugin, documented in plakar-pkg-create(1). pkg rm Uninstall a plugin, documented in plakar-pkg-rm(1). restore Restore files from a Kloset snapshot, documented in plakar-restore(1). rm Remove snapshots from a Kloset store, documented in plakar-rm(1). server Start a Plakar server, documented in plakar-server(1). source Manage configurations for the source connectors, documented in plakar-source(1). store Manage configurations for storage connectors, documented in plakar-store(1). sync Synchronize snapshots between Kloset stores, documented in plakar-sync(1). ui Serve the Plakar web user interface, documented in plakar-ui(1). version Display the current Plakar version, documented in plakar-version(1). 
ENVIRONMENT PLAKAR_PASSPHRASE Passphrase to unlock the Kloset store; overrides the one from the configuration. If set, plakar won't prompt to unlock. The -keyfile option overrides this environment variable. PLAKAR_REPOSITORY Reference to the Kloset store. FILES ~/.cache/plakar and ~/.cache/plakar-agentless Plakar cache directories. ~/.config/plakar/destinations.yml Restore destinations configuration. ~/.config/plakar/sources.yml Backup sources configuration. ~/.config/plakar/stores.yml Kloset stores configuration. ~/.plakar Default Kloset store location. EXAMPLES Create an encrypted Kloset store at the default location:\n$ plakar create Create an encrypted Kloset store on AWS S3:\n$ plakar store add mys3bucket \\ location=s3://s3.eu-west-3.amazonaws.com/backups \\ access_key=\u0026quot;access_key\u0026quot; \\ secret_access_key=\u0026quot;secret_key\u0026quot; $ plakar at @mys3bucket create Create a snapshot of the current directory on the @mys3bucket Kloset store:\n$ plakar at @mys3bucket backup List the snapshots of the default Kloset store:\n$ plakar ls Restore the file \u0026#x201C;notes.md\u0026#x201D; in the current directory from the snapshot with id \u0026#x201C;abcd\u0026#x201D;:\n$ plakar restore -to . abcd:notes.md Remove snapshots older than 30 days:\n$ plakar rm -before 30d September 9, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar/","section":"Docs","summary":"effortless backups","title":"plakar","type":"docs"},{"content":" PLAKAR-POLICY(1) General Commands Manual PLAKAR-POLICY(1) NAME plakar-policy \u0026#x2014; Manage Plakar retention policies\nSYNOPSIS plakar policy subcommand ... DESCRIPTION The plakar policy command manages the retention policies for plakar-prune(1).\nThe configuration consists of a set of named entries, each of them describing a retention policy.\nThe subcommands are as follows:\nadd name [option=value ...] Create a new policy entry identified by name. 
Additional parameters can be set by adding option=value pairs. rm name Remove the policy identified by name from the configuration. set name [option=value ...] Set the option to value for the policy identified by name. Multiple option/value pairs can be specified. show [-ini] [-json] [-yaml] [name ...] Display the current policies configuration. -ini, -json and -yaml control the output format, which is YAML by default. unset name [option ...] Remove the option for the policy identified by name. The available options are described in plakar-query(7): each option corresponds to the similarly named flag.\nEXIT STATUS The plakar-policy utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Create a policy \u0026#x2018;weekly\u0026#x2019; that keeps one backup per week and discards backups older than three months:\n$ plakar policy add weekly $ plakar policy set weekly since='3 months' $ plakar policy set weekly per-week=1 Prune snapshots according to the \u0026#x2018;weekly\u0026#x2019; policy:\n$ plakar prune -policy weekly SEE ALSO plakar(1), plakar-prune(1)\nSeptember 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-policy/","section":"Docs","summary":"Manage Plakar retention policies","title":"policy","type":"docs"},{"content":" PLAKAR-PRUNE(1) General Commands Manual PLAKAR-PRUNE(1) NAME plakar-prune \u0026#x2014; Prune snapshots according to a policy\nSYNOPSIS plakar prune [-apply] [-policy name] [snapshotID ...] DESCRIPTION The plakar prune command deletes snapshots from a Plakar repository. Snapshots can be filtered for deletion by age, by tag, or by specifying the snapshot IDs to remove. 
If no snapshotID is provided, either -before or -tag must be specified to filter the snapshots to delete.\nplakar prune supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe arguments are as follows:\n-apply Delete the matching snapshots. The default is to only show the snapshots that would be removed without actually executing the operation. -policy name Use the given policy. See plakar-policy(1) for how policies are managed. EXAMPLES Remove a specific snapshot by ID:\n$ plakar prune abc123 Remove snapshots older than 30 days:\n$ plakar prune -before 30d Remove snapshots with a specific tag:\n$ plakar prune -tag daily-backup Remove snapshots older than 1 year with a specific tag:\n$ plakar prune -before '1 year' -tag daily-backup DIAGNOSTICS The plakar-prune utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid date format or failure to delete a snapshot. SEE ALSO plakar(1), plakar-backup(1), plakar-policy(1), plakar-query(7)\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-prune/","section":"Docs","summary":"Prune snapshots according to a policy","title":"prune","type":"docs"},{"content":" PLAKAR-PTAR(1) General Commands Manual PLAKAR-PTAR(1) NAME plakar-ptar \u0026#x2014; generate a self-contained Kloset archive (.ptar)\nSYNOPSIS plakar ptar [-plaintext] [-overwrite] [-k location] -o file.ptar [path ...] DESCRIPTION The plakar ptar command creates a single portable archive (a \u0026#x2018;.ptar\u0026#x2019; file) that bundles one or more existing Plakar repositories (\u0026#x201C;klosets\u0026#x201D;) and/or arbitrary filesystem paths into a self-contained package. 
The resulting archive preserves repository metadata, snapshots and data chunks, and is compressed and encrypted for secure transport or off-site storage.\nAt least one data source must be supplied: either one or more -k or -kloset options naming remote or local kloset repositories, and/or one or more path arguments identifying files or directories to back up. The destination archive name is mandatory and supplied with -o.\nUnless the -overwrite flag is given, plakar ptar refuses to replace an existing archive.\nThe options are as follows:\n-plaintext Disable transparent encryption of the archive. If omitted, plakar ptar encrypts repository data using a key derived from the passphrase specified via PLAKAR_PASSPHRASE or prompted interactively. -overwrite Overwrite an existing .ptar file at the destination path. -k location, -kloset location Add a kloset repository to include in the archive. May be specified multiple times to bundle several repositories. -o file.ptar Path of the archive to create. This option is required. path ... Zero or more filesystem paths to back up directly into the archive. ENVIRONMENT PLAKAR_PASSPHRASE Passphrase used to derive the encryption key when encryption is enabled. DIAGNOSTICS The plakar-ptar utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred (invalid arguments, existing archive without -overwrite, unknown hashing algorithm, repository access failure, I/O errors, etc.). 
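The ptar page above has no EXAMPLES section; a sketch of the invocations described by the options (the @mystore label and paths are illustrative):

```shell
# Bundle a configured kloset store and the /etc directory into one
# encrypted archive (passphrase from PLAKAR_PASSPHRASE or prompted)
$ plakar ptar -k @mystore -o backup.ptar /etc

# Rebuild the same archive unencrypted, replacing the existing file
$ plakar ptar -plaintext -overwrite -o backup.ptar /etc
```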
SEE ALSO plakar(1), plakar-backup(1), plakar-create(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-ptar/","section":"Docs","summary":"generate a self-contained Kloset archive (.ptar)","title":"ptar","type":"docs"},{"content":" PLAKAR-QUERY(7) Miscellaneous Information Manual PLAKAR-QUERY(7) NAME plakar-query \u0026#x2014; query flags shared among many Plakar subcommands\nDESCRIPTION What follows is a set of command line arguments that many plakar(1) subcommands provide to filter snapshots.\nThere are two kinds of flags:\nmatchers These select snapshots. If combined, the result is the union of the various matchers. filters These instead filter the output of the matchers by yielding only snapshots matching certain criteria. If combined, the result is the intersection of the various filters. If no matcher is given, all the snapshots are implicitly selected, and then filtered according to the given filters, if any.\nThe matchers are divided into:\nmatchers that select snapshots from the last n units of time: -minutes n \u0026#x00A0; -hours n \u0026#x00A0; -days n \u0026#x00A0; -weeks n \u0026#x00A0; -months n \u0026#x00A0; -years n \u0026#x00A0; or matchers that select snapshots taken during the last n occurrences of a given day of the week:\n-mondays n \u0026#x00A0; -thuesdays n \u0026#x00A0; -wednesdays n \u0026#x00A0; -thursdays n \u0026#x00A0; -fridays n \u0026#x00A0; -saturdays n \u0026#x00A0; -sundays n \u0026#x00A0; matchers that select at most n snapshots per time period: -per-minute n \u0026#x00A0; -per-hour n \u0026#x00A0; -per-day n \u0026#x00A0; -per-week n \u0026#x00A0; -per-month n \u0026#x00A0; -per-year n \u0026#x00A0; -per-monday n \u0026#x00A0; -per-thuesday n \u0026#x00A0; -per-wednesday n \u0026#x00A0; -per-thursday n \u0026#x00A0; -per-friday n \u0026#x00A0; -per-saturday n \u0026#x00A0; -per-sunday n \u0026#x00A0; The filters are:\n-before date Select snapshots older than the given date. 
The date may be in RFC3339 format, as \u0026#x201C;YYYY-mm-DD HH:MM\u0026#x201D;, \u0026#x201C;YYYY-mm-DD HH:MM:SS\u0026#x201D;, \u0026#x201C;YYYY-mm-DD\u0026#x201D;, or \u0026#x201C;YYYY/mm/DD\u0026#x201D; where YYYY is a year, mm a month, DD a day, HH an hour in 24-hour format, MM the minutes, and SS the seconds. Alternatively, human-style intervals like \u0026#x201C;half an hour\u0026#x201D;, \u0026#x201C;a month\u0026#x201D; or \u0026#x201C;2h30m\u0026#x201D; are also accepted.\n-category name Select snapshots whose category is name. -environment name Select snapshots whose environment is name. -job name Select snapshots whose job is name. -latest Select only the latest snapshot. -name name Select snapshots whose name is name. -perimeter name Select snapshots whose perimeter is name. -root path Select snapshots whose root directory is path. May be specified multiple times; snapshots are selected if any of the given paths matches. -since date Select snapshots newer than the given date. The accepted format is the same as -before. -tag name Select snapshots tagged with name. May be specified multiple times, and multiple tags may be given at the same time if comma-separated. If a tag name is prefixed with an exclamation mark \u0026#x2018;!\u0026#x2019;, the matching is inverted and the snapshot is ignored if it contains said tag. 
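The union and intersection semantics described above can be combined in a single query, for example with plakar ls, which supports these flags (the tag names are illustrative):

```shell
# Union of two matchers: snapshots from the last 7 days, plus at most
# one snapshot per month
$ plakar ls -days 7 -per-month 1

# Filters intersect the matched set: among the last 30 days, keep only
# snapshots tagged daily-backup and not tagged broken
$ plakar ls -days 30 -tag 'daily-backup,!broken'
```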
November 28, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-query/","section":"Docs","summary":"query flags shared among many Plakar subcommands","title":"query","type":"docs"},{"content":" PLAKAR-RESTORE(1) General Commands Manual PLAKAR-RESTORE(1) NAME plakar-restore \u0026#x2014; Restore files from a Plakar snapshot\nSYNOPSIS plakar restore [-name name] [-category category] [-environment environment] [-perimeter perimeter] [-job job] [-tag tag] [-latest] [-before date] [-since date] [-concurrency number] [-quiet] [-to directory] [-skip-permissions] [snapshotID:path ...] DESCRIPTION The plakar restore command is used to restore files and directories at path from a specified Plakar snapshot to the local file system. If path is omitted, then all the files in the specified snapshotID are restored. If no snapshotID is provided, the command attempts to restore the current working directory from the last matching snapshot.\nThe options are as follows:\n-name string Only apply command to snapshots that match name. -category string Only apply command to snapshots that match category. -environment string Only apply command to snapshots that match environment. -perimeter string Only apply command to snapshots that match perimeter. -job string Only apply command to snapshots that match job. -tag string Only apply command to snapshots that match tag. -concurrency number Set the maximum number of parallel tasks for faster processing. Defaults to 8 * CPU count + 1. -skip-permissions Skip restoring file permissions and ownership during restore, defaulting to 0750 for directories and 0640 for files. -to directory Specify the base directory to which the files will be restored. If omitted, files are restored to the current working directory. -quiet Suppress output to standard output, only logging errors and warnings. 
EXAMPLES Restore all files from a specific snapshot to the current directory:\n$ plakar restore abc123 Restore to a specific directory:\n$ plakar restore -to /mnt/ abc123 Restore latest snapshot to a specific directory:\n$ plakar restore -latest -to /mnt/ Restore specific path to a specific directory:\n$ plakar restore -to /mnt/ abc123:/etc/apache2 Restore to a specific destination:\n$ plakar restore -to @s3target abc123 Restore specific path to a specific destination:\n$ plakar restore -to @s3target abc123:/etc/apache2 DIAGNOSTICS The plakar-restore utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as a failure to locate the snapshot or a destination directory issue. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-restore/","section":"Docs","summary":"Restore files from a Plakar snapshot","title":"restore","type":"docs"},{"content":" PLAKAR-RM(1) General Commands Manual PLAKAR-RM(1) NAME plakar-rm \u0026#x2014; Remove snapshots from a Plakar repository\nSYNOPSIS plakar rm [-name name] [-category category] [-environment environment] [-perimeter perimeter] [-job job] [-tag tag] [-latest] [-before date] [-since date] [snapshotID ...] DESCRIPTION The plakar rm command deletes snapshots from a Plakar repository. Snapshots can be filtered for deletion by age, by tag, or by specifying the snapshot IDs to remove. If no snapshotID is provided, either -before or -tag must be specified to filter the snapshots to delete.\nThe arguments are as follows:\n-name name Filter snapshots that match name. -category category Filter snapshots that match category. -environment environment Filter snapshots that match environment. -perimeter perimeter Filter snapshots that match perimeter. -job job Filter snapshots that match job. -tag tag Filter snapshots that match tag. 
-latest Filter the latest snapshot matching the filters. -before date Filter snapshots matching filters and older than the specified date. Accepted formats include relative durations (e.g. 2d for two days, 1w for one week) or specific dates in various formats (e.g. 2006-01-02 15:04:05). -since date Filter snapshots matching filters and created since the specified date, inclusive. Accepted formats include relative durations (e.g. 2d for two days, 1w for one week) or specific dates in various formats (e.g. 2006-01-02 15:04:05). EXAMPLES Remove a specific snapshot by ID:\n$ plakar rm abc123 Remove snapshots older than 30 days:\n$ plakar rm -before 30d Remove snapshots with a specific tag:\n$ plakar rm -tag daily-backup Remove snapshots older than 1 year with a specific tag:\n$ plakar rm -before 1y -tag daily-backup DIAGNOSTICS The plakar-rm utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid date format or failure to delete a snapshot. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-rm/","section":"Docs","summary":"Remove snapshots from a Plakar repository","title":"rm","type":"docs"},{"content":" PLAKAR-SCHEDULER(1) General Commands Manual PLAKAR-SCHEDULER(1) NAME plakar-scheduler \u0026#x2014; Run the Plakar scheduler\nSYNOPSIS plakar scheduler [-foreground] [start -tasks configfile] [stop] DESCRIPTION The plakar scheduler runs in the background and manages task execution based on the defined schedule.\nThe options are as follows:\n-foreground Run the scheduler in the foreground instead of as a background service. -tasks configfile Specify the configuration file that contains the task definitions and schedules. start -tasks configfile Start the scheduler service and its tasks from configfile. stop Stop the currently running scheduler service. 
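A minimal start/stop workflow can be sketched as follows. This page does not document the configfile schema, so every key in the file below is a hypothetical placeholder, not the real format; consult the scheduler guide for the actual schema. The plakar invocations are commented since they require a configured installation:

```shell
# Write a hypothetical tasks file (key names are illustrative only).
cat > /tmp/plakar-tasks.yaml <<'EOF'
tasks:
  - name: daily-www
    backup:
      source: /var/www
      interval: 24h
EOF
# Then, assuming the file matched the real schema:
#   plakar scheduler start -tasks /tmp/plakar-tasks.yaml
#   plakar scheduler stop
```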
DIAGNOSTICS The plakar-scheduler utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid parameters or configuration issues. SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-scheduler/","section":"Docs","summary":"Run the Plakar scheduler","title":"scheduler","type":"docs"},{"content":" PLAKAR-SERVER(1) General Commands Manual PLAKAR-SERVER(1) NAME plakar-server \u0026#x2014; Start a Plakar server\nSYNOPSIS plakar server [-allow-delete] [-listen [host]:port] DESCRIPTION The plakar server command starts a Plakar server instance at the provided address, allowing remote interaction with a Kloset store over a network.\nThe options are as follows:\n-allow-delete Enable delete operations. By default, delete operations are disabled to prevent accidental data loss. -listen [host]:port The host and port to listen on, separated by a colon. The host name is optional, and defaults to all available addresses. If -listen is not provided, the server defaults to listening on localhost at port 9876. 
EXAMPLES Start a plakar server on the local store:\n$ plakar server Start a plakar server on a remote store:\n$ plakar at sftp://example.org server Start a server on a specific address and port:\n$ plakar server -listen 127.0.0.1:12345 DIAGNOSTICS The plakar-server utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nCAVEATS When a host name is provided, plakar server uses only one of the IP addresses it resolves to, preferably IPv4.\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-server/","section":"Docs","summary":"Start a Plakar server","title":"server","type":"docs"},{"content":" PLAKAR-SERVICE(1) General Commands Manual PLAKAR-SERVICE(1) NAME plakar-service \u0026#x2014; Manage optional Plakar-connected services\nSYNOPSIS plakar service list plakar service add name [key=value ...] plakar service rm name plakar service status name plakar service show name plakar service enable name plakar service disable name plakar service set name [key=value ...] plakar service unset name [key ...] DESCRIPTION The plakar service command allows you to enable, disable, and inspect additional services that integrate with the plakar platform via plakar-login(1) authentication. These services connect to the plakar.io infrastructure, and should only be enabled if you agree to transmit non-sensitive operational data to plakar.io.\nAll subcommands require prior authentication via plakar-login(1).\nServices are managed by the backend and discovered at runtime. For example, when the \u0026#x201C;alerting\u0026#x201D; service is enabled, it will:\nSend email notifications when operations fail. Expose the latest alerting reports in the Plakar UI (see plakar-ui(1)). By default, all services are disabled.\nSUBCOMMANDS list Display the list of available services. add name [key=value ...] Set the configuration for the service identified by name and enable it. 
The configuration is defined by the given set of key/value pairs. The existing configuration, if any, is discarded. rm name Disable the service identified by name and discard its configuration. status name Display the current status (enabled or disabled) of the named service. show name Display the configuration for the specified service. enable name Enable the specified service. disable name Disable the specified service. set name [key=value ...] Set the configuration key to value for the service identified by name. Multiple key/value pairs can be specified. unset name [key ...] Unset the configuration key for the service identified by name. Multiple keys can be specified. EXAMPLES Check the status of the alerting service:\n$ plakar service status alerting Enable alerting:\n$ plakar service enable alerting Disable alerting:\n$ plakar service disable alerting SEE ALSO plakar-login(1), plakar-ui(1)\nAugust 7, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-service/","section":"Docs","summary":"Manage optional Plakar-connected services","title":"service","type":"docs"},{"content":" PLAKAR-SOURCE(1) General Commands Manual PLAKAR-SOURCE(1) NAME plakar-source \u0026#x2014; Manage Plakar backup source configuration\nSYNOPSIS plakar source subcommand ... DESCRIPTION The plakar source command manages the configuration of data sources for Plakar to back up.\nThe configuration consists of a set of named entries, each of them describing a source for a backup operation.\nA source is defined by at least a location, specifying the importer to use, and some importer-specific parameters.\nThe subcommands are as follows:\nadd name location [option=value ...] Create a new source entry identified by name with the specified location describing the importer to use. Additional importer options can be set by adding option=value parameters. 
check name Check whether the importer for the source identified by name is properly configured. import [-config location] [-overwrite] [-rclone] [sections ...] Import source configurations from various sources including files, piped input, or rclone configurations. By default, reads from stdin, allowing for piped input from other commands.\nThe -config option specifies a file or URL to read the configuration from.\nThe -overwrite option allows overwriting existing source configurations with the same names.\nThe -rclone option treats the input as an rclone configuration, useful for importing rclone remotes as Plakar sources.\nSpecific sections can be imported by listing their names.\nSections can be renamed during import by appending :newname.\nFor detailed examples and usage patterns, see the Importing Configurations guide at https://plakar.io/docs/v1.0.6/guides/importing-configurations/.\nping name Try to open the data source identified by name to make sure it is reachable. rm name Remove the source identified by name from the configuration. set name [option=value ...] Set the option to value for the source identified by name. Multiple option/value pairs can be specified. show [-secrets] [name ...] Display the current sources configuration. If -secrets is specified, sensitive information such as passwords or tokens will be shown. unset name [option ...] Remove the option for the source entry identified by name. EXIT STATUS The plakar-source utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nSeptember 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-source/","section":"Docs","summary":"Manage Plakar backup source configuration","title":"source","type":"docs"},{"content":" PLAKAR-STORE(1) General Commands Manual PLAKAR-STORE(1) NAME plakar-store \u0026#x2014; Manage Plakar store configurations\nSYNOPSIS plakar store subcommand ... 
DESCRIPTION The plakar store command manages the Plakar store configurations.\nThe configuration consists of a set of named entries, each of them describing a Plakar store holding backups.\nA store is defined by at least a location, specifying the storage implementation to use, and some storage-specific parameters.\nThe subcommands are as follows:\nadd name location [option=value ...] Create a new store entry identified by name with the specified location. Specific additional configuration parameters can be set by adding option=value parameters. check name Check whether the store identified by name is properly configured. import [-config location] [-overwrite] [-rclone] [sections ...] Import store configurations from various sources including files, piped input, or rclone configurations. By default, reads from stdin, allowing for piped input from other commands.\nThe -config option specifies a file or URL to read the configuration from.\nThe -overwrite option allows overwriting existing store configurations with the same names.\nThe -rclone option treats the input as an rclone configuration, useful for importing rclone remotes as Plakar stores.\nSpecific sections can be imported by listing their names.\nSections can be renamed during import by appending :newname.\nFor detailed examples and usage patterns, see the Importing Configurations guide at https://plakar.io/docs/v1.0.6/guides/importing-configurations/.\nping name Try to connect to the store identified by name to make sure it is reachable. rm name Remove the store identified by name from the configuration. set name [option=value ...] Set the option to value for the store identified by name. Multiple option/value pairs can be specified. show [-secrets] [name ...] Display the current stores configuration. If -secrets is specified, sensitive information such as passwords or tokens will be shown. unset name [option ...] Remove the option for the store entry identified by name. 
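The -rclone import path described above can be sketched as follows. The remote name and settings are illustrative, and the plakar invocations are shown commented since they require a configured installation:

```shell
# A minimal rclone-style configuration: INI sections, one per remote.
cat > /tmp/rclone.conf <<'EOF'
[mys3]
type = s3
provider = AWS
region = eu-west-1
EOF
# Import the section as a Plakar store, renaming it on the way in
# (the :newname suffix documented above):
#   plakar store import -rclone -config /tmp/rclone.conf mys3:offsite
# Or feed it on stdin:
#   plakar store import -rclone < /tmp/rclone.conf
```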
DIAGNOSTICS The plakar-store utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nSeptember 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-store/","section":"Docs","summary":"Manage Plakar store configurations","title":"store","type":"docs"},{"content":" PLAKAR-SYNC(1) General Commands Manual PLAKAR-SYNC(1) NAME plakar-sync \u0026#x2014; Synchronize snapshots between Plakar repositories\nSYNOPSIS plakar sync [-packfiles path] [snapshotID] to | from | with repository DESCRIPTION The plakar sync command synchronizes snapshots between two Plakar repositories. If a specific snapshot ID is provided, only snapshots with matching IDs will be synchronized.\nplakar sync supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-packfiles path Path where the temporary packfiles are written instead of building them in memory. If the special value \u0026#x2018;memory\u0026#x2019; is specified then the packfiles are built in memory (the default). The arguments are as follows:\nto | from | with Specifies the direction of synchronization: to Synchronize snapshots from the local repository to the specified peer repository. from Synchronize snapshots from the specified peer repository to the local repository. with Synchronize snapshots in both directions, ensuring both repositories are fully synchronized. repository Path to the peer repository to synchronize with. 
EXAMPLES Synchronize the snapshot \u0026#x2018;abcd\u0026#x2019; with a peer repository:\n$ plakar sync abcd to @peer Bi-directional synchronization of recent snapshots with a peer repository:\n$ plakar sync -since 7d with @peer Synchronize all snapshots of @peer to @repo:\n$ plakar at @repo sync from @peer DIAGNOSTICS The plakar-sync utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 A general failure occurred, such as an invalid repository path, snapshot ID mismatch, or network error. SEE ALSO plakar(1), plakar-query(7)\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-sync/","section":"Docs","summary":"Synchronize snapshots between Plakar repositories","title":"sync","type":"docs"},{"content":" PLAKAR-TOKEN(1) General Commands Manual PLAKAR-TOKEN(1) NAME plakar-token \u0026#x2014; Manage Plakar tokens\nSYNOPSIS plakar token [create] DESCRIPTION The plakar token command manages tokens used to authenticate to Plakar services. Tokens are not currently usable and exist only for future features.\nSUBCOMMANDS create Create a new token. SEE ALSO plakar(1)\nAugust 6, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-token/","section":"Docs","summary":"Manage Plakar tokens","title":"token","type":"docs"},{"content":" PLAKAR-UI(1) General Commands Manual PLAKAR-UI(1) NAME plakar-ui \u0026#x2014; Serve the Plakar web user interface\nSYNOPSIS plakar ui [-addr address] [-cors] [-no-auth] [-no-spawn] DESCRIPTION The plakar ui command serves the Plakar web user interface. By default, it opens the default web browser.\nThe options are as follows:\n-addr address Specify the address and port for the UI to listen on, separated by a colon (e.g. localhost:8080). If omitted, plakar ui listens on localhost on a random port. 
-cors Set the \u0026#x2018;Access-Control-Allow-Origin\u0026#x2019; HTTP header to allow the UI to be accessed from any origin. -no-auth Disable the authentication token that otherwise is needed to consume the exposed HTTP APIs. -no-spawn Do not automatically open the web browser. EXAMPLES Use a custom address and disable automatic browser execution:\n$ plakar ui -addr localhost:9090 -no-spawn DIAGNOSTICS The plakar-ui utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 A general error occurred, such as an inability to launch the UI or bind to the specified address. SEE ALSO plakar(1)\nAugust 6, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-ui/","section":"Docs","summary":"Serve the Plakar web user interface","title":"ui","type":"docs"},{"content":" PLAKAR-VERSION(1) General Commands Manual PLAKAR-VERSION(1) NAME plakar-version \u0026#x2014; Display the current Plakar version\nSYNOPSIS plakar version DESCRIPTION The plakar version command displays the current version of Plakar.\nSEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.6/references/commands/plakar-version/","section":"Docs","summary":"Display the current Plakar version","title":"version","type":"docs"},{"content":" PLAKAR-AGENT(1) General Commands Manual PLAKAR-AGENT(1) NAME plakar-agent \u0026#x2014; Run the Plakar agent\nSYNOPSIS plakar agent [start [-foreground] [-log logfile] [-teardown delay]] plakar agent stop DESCRIPTION The plakar agent start command, which is the default, starts the Plakar agent which will execute subsequent plakar(1) commands on their behalf for faster processing.\nplakar agent is executed automatically by most plakar(1) commands and terminates by itself when idle for too long, so usually there's no need to manually start it.\nThe options for plakar agent start are as 
follows:\n-foreground Do not daemonize, run in the foreground and log to standard error. -log logfile Write log output to the given logfile, which is created if it does not exist. The default is to log to syslog. -teardown delay Specify the delay after which the idle agent terminates. The delay parameter must be given as a sequence of decimal values, each followed by a time unit (e.g. \u0026#x201C;1m30s\u0026#x201D;). Defaults to 5 seconds. plakar agent stop forces the currently running agent to stop. This is useful when upgrading from an older plakar(1) version where the agent was always running.\nDIAGNOSTICS The plakar-agent utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid parameters or configuration issues. SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-agent/","section":"Docs","summary":"Run the Plakar agent","title":"agent","type":"docs"},{"content":" PLAKAR-ARCHIVE(1) General Commands Manual PLAKAR-ARCHIVE(1) NAME plakar-archive \u0026#x2014; Create an archive from a Plakar snapshot\nSYNOPSIS plakar archive [-format type] [-output archive] [-rebase] snapshotID:path DESCRIPTION The plakar archive command creates an archive of the given type from the contents at path of a specified Plakar snapshot, or all the files if no path is given.\nThe options are as follows:\n-format type Specify the archive format. Supported formats are: tar Creates a tar file. tarball Creates a compressed tar.gz file. zip Creates a zip archive. -output pathname Specify the output path for the archive file. If omitted, the archive is created with a default name based on the current date and time. -rebase Strip the leading path from archived files, useful for creating \u0026quot;flat\u0026quot; archives without nested directories. 
EXAMPLES Create a tarball of the entire snapshot:\n$ plakar archive -output backup.tar.gz -format tarball abc123 Create a zip archive of a specific directory within a snapshot:\n$ plakar archive -output dir.zip -format zip abc123:/var/www Archive with rebasing to remove directory structure:\n$ plakar archive -rebase -format tar abc123 DIAGNOSTICS The plakar-archive utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as unsupported format, missing files, or permission issues. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-archive/","section":"Docs","summary":"Create an archive from a Plakar snapshot","title":"archive","type":"docs"},{"content":" PLAKAR-BACKUP(1) General Commands Manual PLAKAR-BACKUP(1) NAME plakar-backup \u0026#x2014; Create a new snapshot in a Kloset store\nSYNOPSIS plakar backup [-concurrency number] [-force-timestamp timestamp] [-ignore pattern] [-ignore-file file] [-check] [-o option] [-packfiles path] [-quiet] [-silent] [-tag tag] [-scan] [place] DESCRIPTION The plakar backup command creates a new snapshot of place, or the current directory. Snapshots can be filtered to ignore specific files or directories based on patterns provided through options.\nplace can be either a path, an URI, or a label with the form \u0026#x201C;@name\u0026#x201D; to reference a source connector configured with plakar-source(1).\nThe options are as follows:\n-concurrency number Set the maximum number of parallel tasks for faster processing. Defaults to 8 * CPU count + 1. -force-timestamp timestamp Specify a fixed timestamp (in ISO 8601 or relative human format) to use for the snapshot. Could be used to reimport an existing backup with the same timestamp. 
-ignore pattern Specify individual gitignore exclusion patterns to ignore files or directories in the backup. This option can be repeated. -ignore-file file Specify a file containing gitignore exclusion patterns, one per line, to ignore files or directories in the backup. -check Perform a full check on the backup after success. -o option Can be used to pass extra arguments to the source connector. The given option takes precedence over the configuration file. -quiet Suppress output to standard output, only logging errors and warnings. -packfiles path Path where the temporary packfiles are written instead of building them in memory. If the special value \u0026#x2018;memory\u0026#x2019; is specified then the packfiles are built in memory (the default). -silent Suppress all output. -tag tag Comma-separated list of tags to apply to the snapshot. -scan Do not write a snapshot; instead, perform a dry run by outputting the list of files and directories that would be included in the backup. Respects all exclude patterns and other options, but makes no changes to the Kloset store. EXAMPLES Create a snapshot of the current directory with two tags:\n$ plakar backup -tag daily-backup,production Back up a specific directory with exclusion patterns from a file:\n$ plakar backup -ignore-file ~/my-ignore-file /var/www Back up a directory with specific file exclusions:\n$ plakar backup -ignore \u0026quot;*.tmp\u0026quot; -ignore \u0026quot;*.log\u0026quot; /var/www DIAGNOSTICS The plakar-backup utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully; a snapshot was created, but some items may have been skipped (for example, files without sufficient permissions). Run plakar-info(1) on the new snapshot to view any errors. \u0026gt;0 An error occurred, such as failure to access the Kloset store or issues with exclusion patterns. 
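The -ignore-file workflow can be sketched end to end. The patterns and paths are illustrative, and the backup invocation is commented since it needs a configured store:

```shell
# Build a gitignore-style exclusion file, one pattern per line.
cat > /tmp/plakar-ignore <<'EOF'
*.tmp
*.log
node_modules/
EOF
# Then point -ignore-file at it:
#   plakar backup -ignore-file /tmp/plakar-ignore /var/www
```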
SEE ALSO plakar(1), plakar-source(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-backup/","section":"Docs","summary":"Create a new snapshot in a Kloset store","title":"backup","type":"docs"},{"content":" PLAKAR-CAT(1) General Commands Manual PLAKAR-CAT(1) NAME plakar-cat \u0026#x2014; Display file contents from a Plakar snapshot\nSYNOPSIS plakar cat [-decompress] [-highlight] snapshotID:path ... DESCRIPTION The plakar cat command outputs the contents of path within Plakar snapshots to the standard output. It can decompress compressed files and optionally apply syntax highlighting based on the file type.\nThe options are as follows:\n-decompress If set, Plakar attempts to decompress application/gzip files. -highlight Apply syntax highlighting to the output based on the file type. EXAMPLES Display a file's contents from a snapshot:\n$ plakar cat abc123:/etc/passwd Display a file with syntax highlighting:\n$ plakar cat -highlight abc123:/home/op/korpus/driver.sh DIAGNOSTICS The plakar-cat utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as failure to retrieve a file or decompress content. SEE ALSO plakar(1), plakar-backup(1)\nAugust 6, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-cat/","section":"Docs","summary":"Display file contents from a Plakar snapshot","title":"cat","type":"docs"},{"content":" PLAKAR-CHECK(1) General Commands Manual PLAKAR-CHECK(1) NAME plakar-check \u0026#x2014; Check data integrity in a Plakar repository\nSYNOPSIS plakar check [-concurrency number] [-fast] [-no-verify] [-quiet] [snapshotID:path ...] DESCRIPTION The plakar check command verifies the integrity of data in a Plakar repository. 
It checks the given paths inside the snapshots for consistency and validates file macs to ensure no corruption has occurred, or all the data in the repository if no snapshotID or location flags are given.\nIn addition to the flags described below, plakar check supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-concurrency number Set the maximum number of parallel tasks for faster processing. Defaults to 8 * CPU count + 1. -fast Enable a faster check that skips mac verification. This option performs only structural validation without confirming data integrity. -no-verify Disable signature verification. This option allows the integrity check to proceed regardless of an invalid snapshot signature. -quiet Suppress output to standard output, only logging errors and warnings. EXAMPLES Perform a full integrity check on all snapshots:\n$ plakar check Perform a fast check on specific paths of two snapshots:\n$ plakar check -fast abc123:/etc/passwd def456:/var/www DIAGNOSTICS The plakar-check utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully with no integrity issues found. \u0026gt;0 An error occurred, such as corruption detected in a snapshot or failure to check data integrity. 
SEE ALSO plakar(1), plakar-query(7)\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-check/","section":"Docs","summary":"Check data integrity in a Plakar repository","title":"check","type":"docs"},{"content":" PLAKAR-CLONE(1) General Commands Manual PLAKAR-CLONE(1) NAME plakar-clone \u0026#x2014; Clone a Plakar repository to a new location\nSYNOPSIS plakar clone to path DESCRIPTION The plakar clone command creates a full clone of an existing Plakar repository, including all snapshots, packfiles, and repository states, and saves it at the specified path.\nEXAMPLES Clone a repository to a new location:\n$ plakar clone to /path/to/new/repository DIAGNOSTICS The plakar-clone utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as failure to access the source repository or to create the target repository. SEE ALSO plakar(1), plakar-create(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-clone/","section":"Docs","summary":"Clone a Plakar repository to a new location","title":"clone","type":"docs"},{"content":" PLAKAR-CREATE(1) General Commands Manual PLAKAR-CREATE(1) NAME plakar-create \u0026#x2014; Create a new Plakar repository\nSYNOPSIS plakar create [-plaintext] DESCRIPTION The plakar create command creates a new Plakar repository at the specified path, which defaults to ~/.plakar.\nThe options are as follows:\n-plaintext Disable transparent encryption for the repository. If specified, the repository will not use encryption. ENVIRONMENT PLAKAR_PASSPHRASE Repository encryption password. DIAGNOSTICS The plakar-create utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. 
\u0026gt;0 An error occurred, such as invalid parameters, inability to create the repository, or configuration issues. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-create/","section":"Docs","summary":"Create a new Plakar repository","title":"create","type":"docs"},{"content":" PLAKAR-DESTINATION(1) General Commands Manual PLAKAR-DESTINATION(1) NAME plakar-destination \u0026#x2014; Manage Plakar restore destination configuration\nSYNOPSIS plakar destination subcommand ... DESCRIPTION The plakar destination command manages the configuration of destinations to which Plakar will restore.\nThe configuration consists of a set of named entries, each of them describing a destination where a restore operation may happen.\nA destination is defined by at least a location, specifying the exporter to use, and some exporter-specific parameters.\nThe subcommands are as follows:\nadd name location [option=value ...] Create a new destination entry identified by name with the specified location describing the exporter to use. Additional exporter options can be set by adding option=value parameters. check name Check whether the exporter for the destination identified by name is properly configured. import [-config location] [-overwrite] [-rclone] [sections ...] Import destination configurations from various sources including files, piped input, or rclone configurations. 
By default, reads from stdin, allowing for piped input from other commands like plakar source show.\nThe -config option specifies a file or URL to read the configuration from.\nThe -overwrite option allows overwriting existing destination configurations with the same names.\nThe -rclone option treats the input as an rclone configuration, useful for importing rclone remotes as Plakar destinations.\nSpecific sections can be imported by listing their names.\nSections can be renamed during import by appending :newname.\nFor detailed examples and usage patterns, see the Importing Configurations guide at https://plakar.io/docs/v1.0.5/guides/importing-configurations/.\nping name Try to open the destination identified by name to make sure it is reachable. rm name Remove the destination identified by name from the configuration. set name [option=value ...] Set the option to value for the destination identified by name. Multiple option/value pairs can be specified. show [-secrets] [name ...] Display the current destinations configuration. If -secrets is specified, sensitive information such as passwords or tokens will be shown. unset name [option ...] Remove the option for the destination entry identified by name. EXIT STATUS The plakar-destination utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nSeptember 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-destination/","section":"Docs","summary":"Manage Plakar restore destination configuration","title":"destination","type":"docs"},{"content":" PLAKAR-DIAG(1) General Commands Manual PLAKAR-DIAG(1) NAME plakar-diag \u0026#x2014; Display detailed information about Plakar internal structures\nSYNOPSIS plakar diag [contenttype | errors | locks | object | packfile | snapshot | state | vfs | xattr] DESCRIPTION The plakar diag command provides detailed information about various internal data structures. 
The type of information displayed depends on the specified argument. Without any arguments, display information about the repository.\nThe sub-commands are as follows:\ncontenttype snapshotID:path \u0026#x00A0; errors snapshotID Display the list of errors in the given snapshot. locks Display the list of locks currently held on the repository. object objectID Display information about a specific object, including its mac, type, tags, and associated data chunks. packfile packfileID Show details of packfiles, including entries and macs, which store object data within the repository. snapshot snapshotID Show detailed information about a specific snapshot, including its metadata, directory and file count, and size. state List or describe the states in the repository. vfs snapshotID:path Show filesystem (VFS) details for a specific path within a snapshot, listing directory or file attributes, including permissions, ownership, and custom metadata. xattr snapshotID:path \u0026#x00A0; EXAMPLES Show repository information:\n$ plakar diag Show detailed information for a snapshot:\n$ plakar diag snapshot abc123 List all states in the repository:\n$ plakar diag state Display a specific object within a snapshot:\n$ plakar diag object 1234567890abcdef Display filesystem details for a path within a snapshot:\n$ plakar diag vfs abc123:/etc/passwd DIAGNOSTICS The plakar-diag utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as an invalid snapshot or object ID, or a failure to retrieve the requested data. 
SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-diag/","section":"Docs","summary":"Display detailed information about Plakar internal structures","title":"diag","type":"docs"},{"content":" PLAKAR-DIFF(1) General Commands Manual PLAKAR-DIFF(1) NAME plakar-diff \u0026#x2014; Show differences between files in Plakar snapshots\nSYNOPSIS plakar diff [-highlight] [-recursive] snapshotID1[:path1] snapshotID2[:path2] DESCRIPTION The plakar diff command compares two Plakar snapshots, optionally restricting to specific files within them. If only snapshot IDs are provided, it compares the root directories of each snapshot. If file paths are specified, the command compares the individual files. The diff output is shown in unified diff format, with an option to highlight differences.\nThe options are as follows:\n-highlight Apply syntax highlighting to the diff output for readability. -recursive When comparing directories, recursively compare all subdirectories. EXAMPLES Compare root directories of two snapshots:\n$ plakar diff abc123 def456 Compare /etc/passwd across snapshots with highlighting:\n$ plakar diff -highlight abc123:/etc/passwd def456:/etc/passwd DIAGNOSTICS The plakar-diff utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid snapshot IDs, missing files, or an unsupported file type. 
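The -recursive flag described above can be combined with the snapshot:path syntax; a minimal sketch (snapshot IDs and paths are illustrative):

```shell
# Recursively compare the /etc tree between two snapshots
$ plakar diff -recursive abc123:/etc def456:/etc
```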
SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-diff/","section":"Docs","summary":"Show differences between files in Plakar snapshots","title":"diff","type":"docs"},{"content":" PLAKAR-DIGEST(1) General Commands Manual PLAKAR-DIGEST(1) NAME plakar-digest \u0026#x2014; Compute digests for files in a Plakar snapshot\nSYNOPSIS plakar digest [-hashing algorithm] snapshotID[:path] [...] DESCRIPTION The plakar digest command computes and displays digests for the specified path in the given snapshotID. Multiple snapshotID and path arguments may be given. By default, the command computes the digest by reading the file contents.\nThe options are as follows:\n-hashing algorithm Use algorithm to compute the digest. Defaults to SHA256. EXAMPLES Compute the digest of a file within a snapshot:\n$ plakar digest abc123:/etc/passwd Use BLAKE3 as the digest algorithm:\n$ plakar digest -hashing BLAKE3 abc123:/etc/netstart DIAGNOSTICS The plakar-digest utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as failure to retrieve a file digest or invalid snapshot ID. SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-digest/","section":"Docs","summary":"Compute digests for files in a Plakar snapshot","title":"digest","type":"docs"},{"content":" PLAKAR-DUP(1) General Commands Manual PLAKAR-DUP(1) NAME plakar-dup \u0026#x2014; Duplicates an existing snapshot with a different ID\nSYNOPSIS plakar dup DESCRIPTION The plakar dup command creates a duplicate of an existing snapshot with a new snapshot ID. 
The new snapshot is an exact copy of the original, including all files and metadata.\nEXAMPLES Create a duplicate of a snapshot with ID \u0026quot;abc123\u0026quot;:\n$ plakar dup abc123 DIAGNOSTICS The plakar-dup utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as failure to retrieve existing snapshot or invalid snapshot ID. SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-dup/","section":"Docs","summary":"Duplicates an existing snapshot with a different ID","title":"dup","type":"docs"},{"content":" PLAKAR-INFO(1) General Commands Manual PLAKAR-INFO(1) NAME plakar-info \u0026#x2014; Display detailed information about internal structures\nSYNOPSIS plakar info [-errors] [snapshot] DESCRIPTION The plakar info command provides detailed information about a Plakar repository and snapshots. The type of information displayed depends on the specified argument. Without any arguments, display information about the repository.\nThe options are as follows:\n-errors Show errors within the specified snapshot. EXAMPLES Show repository information:\n$ plakar info Show detailed information for a snapshot:\n$ plakar info abc123 Show errors within a snapshot:\n$ plakar info -errors abc123 DIAGNOSTICS The plakar-info utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as an invalid snapshot or object ID, or a failure to retrieve the requested data. 
SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-info/","section":"Docs","summary":"Display detailed information about internal structures","title":"info","type":"docs"},{"content":" PLAKAR-LOCATE(1) General Commands Manual PLAKAR-LOCATE(1) NAME plakar-locate \u0026#x2014; Find filenames in a Plakar snapshot\nSYNOPSIS plakar locate [-snapshot snapshotID] patterns ... DESCRIPTION The plakar locate command searches snapshots to find file names matching any of the given patterns and prints the abbreviated snapshot ID and the full path of the matched files. Matching works according to the shell globbing rules.\nIf neither -snapshot nor location flags are given, plakar locate will search in all snapshots.\nIn addition to the flags described below, plakar locate supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-snapshot snapshotID Limit the search to the given snapshot. EXAMPLES Search for files ending in \u0026#x201C;wd\u0026#x201D;:\n$ plakar locate '*wd' abc123:/etc/master.passwd abc123:/etc/passwd DIAGNOSTICS The plakar-locate utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid parameters, inability to create the repository, or configuration issues. 
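The -snapshot flag described above restricts the search to a single snapshot; a minimal sketch (snapshot ID and pattern are illustrative):

```shell
# Search only snapshot abc123 for configuration files
$ plakar locate -snapshot abc123 '*.conf'
```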
SEE ALSO plakar(1), plakar-backup(1), plakar-query(7)\nCAVEATS The patterns may have to be quoted to avoid the shell attempting to expand them.\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-locate/","section":"Docs","summary":"Find filenames in a Plakar snapshot","title":"locate","type":"docs"},{"content":" PLAKAR-LOGIN(1) General Commands Manual PLAKAR-LOGIN(1) NAME plakar-login \u0026#x2014; Authenticate to Plakar services\nSYNOPSIS plakar login [-email email] [-github] [-no-spawn] [-status] DESCRIPTION The plakar login command initiates an authentication flow with the Plakar platform. Login is optional for most plakar commands but required to enable certain services, such as alerting. See also plakar-service(1).\nOnly one authentication method may be specified per invocation: the -email and -github options are mutually exclusive. If neither is provided, -github is assumed.\nThe options are as follows:\n-email email Send a login link to the specified email address. Clicking the link in the received email will authenticate plakar. -github Use GitHub OAuth to authenticate. A browser will be spawned to initiate the OAuth flow unless -no-spawn is specified. -no-spawn Do not automatically open a browser window for authentication flows. -status Check whether the user is currently logged in. This option cannot be used with any other options. 
EXAMPLES Start a login via email:\n$ plakar login -email user@example.com Authenticate via GitHub (default, opens browser):\n$ plakar login SEE ALSO plakar(1), plakar-logout(1), plakar-service(1)\nJuly 8, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-login/","section":"Docs","summary":"Authenticate to Plakar services","title":"login","type":"docs"},{"content":" PLAKAR-LOGOUT(1) General Commands Manual PLAKAR-LOGOUT(1) NAME plakar-logout \u0026#x2014; Log out from Plakar services\nSYNOPSIS plakar logout DESCRIPTION The plakar logout command logs out an authenticated session with the Plakar platform.\nEXAMPLES Log out from the current session:\n$ plakar logout SEE ALSO plakar(1), plakar-login(1), plakar-service(1)\nJuly 8, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-logout/","section":"Docs","summary":"Log out from Plakar services","title":"logout","type":"docs"},{"content":" PLAKAR-LS(1) General Commands Manual PLAKAR-LS(1) NAME plakar-ls \u0026#x2014; List snapshots and their contents in a Plakar repository\nSYNOPSIS plakar ls [-uuid] [-recursive] [snapshotID:path] DESCRIPTION The plakar ls command lists snapshots stored in a Plakar repository, and optionally displays the contents of path in a specified snapshot.\nIn addition to the flags described below, plakar ls supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-uuid Display the full UUID for each snapshot instead of the shorter snapshot ID. -recursive List directory contents recursively when exploring snapshot contents. 
EXAMPLES List all snapshots with their short IDs:\n$ plakar ls List all snapshots with UUIDs instead of short IDs:\n$ plakar ls -uuid List snapshots with a specific tag:\n$ plakar ls -tag daily-backup List contents of a specific snapshot:\n$ plakar ls abc123 Recursively list contents of a specific snapshot:\n$ plakar ls -recursive abc123:/etc DIAGNOSTICS The plakar-ls utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as failure to retrieve snapshot information or invalid snapshot ID. SEE ALSO plakar(1), plakar-query(7)\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-ls/","section":"Docs","summary":"List snapshots and their contents in a Plakar repository","title":"ls","type":"docs"},{"content":" PLAKAR-MAINTENANCE(1) General Commands Manual PLAKAR-MAINTENANCE(1) NAME plakar-maintenance \u0026#x2014; Remove unused data from a Plakar repository\nSYNOPSIS plakar maintenance DESCRIPTION The plakar maintenance command removes unused blobs, objects, and chunks from a Plakar repository to reduce storage space. It identifies unreferenced data and reorganizes packfiles to ensure only active snapshots and their dependencies are retained. The maintenance process updates snapshot indexes to reflect these changes.\nDIAGNOSTICS The plakar-maintenance utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred during maintenance, such as failure to update indexes or remove data. 
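EXAMPLES for plakar maintenance: the command takes no arguments, and can be pointed at a configured store with the at option from plakar(1); a sketch (the @mys3bucket label is illustrative):

```shell
# Reclaim unused space in the default Kloset store
$ plakar maintenance

# Reclaim unused space in a configured store (label is illustrative)
$ plakar at @mys3bucket maintenance
```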
SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-maintenance/","section":"Docs","summary":"Remove unused data from a Plakar repository","title":"maintenance","type":"docs"},{"content":" PLAKAR-MOUNT(1) General Commands Manual PLAKAR-MOUNT(1) NAME plakar-mount \u0026#x2014; Mount Plakar snapshots as read-only filesystem\nSYNOPSIS plakar mount mountpoint DESCRIPTION The plakar mount command mounts a Plakar repository snapshot as a read-only filesystem at the specified mountpoint. This allows users to access snapshot contents as if they were part of the local file system, providing easy browsing and retrieval of files without needing to explicitly restore them. This command may not work on all Operating Systems.\nEXAMPLES Mount a snapshot to the specified directory:\n$ plakar mount ~/mnt DIAGNOSTICS The plakar-mount utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as an invalid mountpoint or failure during the mounting process. SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-mount/","section":"Docs","summary":"Mount Plakar snapshots as read-only filesystem","title":"mount","type":"docs"},{"content":" PLAKAR-PKG-ADD(1) General Commands Manual PLAKAR-PKG-ADD(1) NAME plakar-pkg-add \u0026#x2014; Install Plakar plugins\nSYNOPSIS plakar pkg add plugin ... DESCRIPTION The plakar pkg add command adds a local or a remote plugin.\nIf plugin is an absolute path, or if it starts with \u0026#x2018;./\u0026#x2019;, then it is considered a path to a local plugin file, otherwise it is downloaded from the Plakar plugin server. In the latter case, the user must be logged in via the plakar-login(1) command.\nFILES ~/.cache/plakar/plugins/ Plugin cache directory. Respects XDG_CACHE_HOME if set. 
~/.local/share/plakar/plugins Plugin directory. Respects XDG_DATA_HOME if set. SEE ALSO plakar-login(1), plakar-pkg-build(1), plakar-pkg-create(1), plakar-pkg-rm(1), plakar-pkg-show(1)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-pkg-add/","section":"Docs","summary":"Install Plakar plugins","title":"pkg-add","type":"docs"},{"content":" PLAKAR-PKG-BUILD(1) General Commands Manual PLAKAR-PKG-BUILD(1) NAME plakar-pkg-build \u0026#x2014; Build Plakar plugins from source\nSYNOPSIS plakar pkg build recipe.yaml DESCRIPTION The plakar pkg build fetches the sources and builds the plugin as specified in the given plakar-pkg-recipe.yaml(5). If it builds successfully, the resulting plugin will be created in the current working directory.\nFILES ~/.cache/plakar/plugins/ Plugin cache directory. Respects XDG_CACHE_HOME if set. ~/.local/share/plakar/plugins Plugin directory. Respects XDG_DATA_HOME if set. SEE ALSO plakar-pkg-add(1), plakar-pkg-create(1), plakar-pkg-rm(1), plakar-pkg-show(1), plakar-pkg-recipe.yaml(5)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-pkg-build/","section":"Docs","summary":"Build Plakar plugins from source","title":"pkg-build","type":"docs"},{"content":" PLAKAR-PKG-CREATE(1) General Commands Manual PLAKAR-PKG-CREATE(1) NAME plakar-pkg-create \u0026#x2014; Package a plugin\nSYNOPSIS plakar pkg create manifest.yaml DESCRIPTION The plakar pkg create assembles a plugin using the provided plakar-pkg-manifest.yaml(5).\nAll the files needed for the plugin must already be available, i.e. 
executables must already be built.\nAll external files must reside in the same directory as the manifest.yaml or in subdirectories.\nSEE ALSO plakar-pkg-add(1), plakar-pkg-build(1), plakar-pkg-rm(1), plakar-pkg-show(1), plakar-pkg-manifest.yaml(5)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-pkg-create/","section":"Docs","summary":"Package a plugin","title":"pkg-create","type":"docs"},{"content":" PLAKAR-PKG-MANIFEST.YAML(5) File Formats Manual PLAKAR-PKG-MANIFEST.YAML(5) NAME manifest.yaml \u0026#x2014; Manifest for plugin assembly\nDESCRIPTION The manifest.yaml file format describes how to package a plugin. No build or compilation is done, so all executables and other files must be prepared beforehand.\nmanifest.yaml must have a top-level YAML object with the following fields:\nname The name of the plugin. display_name The displayed name in the UI. description A short description of the connectors. homepage A link to the homepage. license The license of the connectors. tag A YAML array of strings for tags that describe the connectors. api_version The API version supported. version The plugin version, which doubles as the git tag as well. It must follow semantic versioning and have a \u0026#x2018;v\u0026#x2019; prefix, e.g. \u0026#x2018;v1.2.3\u0026#x2019;. connectors A YAML array of objects with the following properties: type The connector type, one of importer, exporter, or store. protocols An array of YAML strings containing all the protocols that the connector supports. location_flags An optional array of YAML strings describing some properties of the connector. These properties are: localfs Whether paths given to this connector have to be made absolute. file Whether this store backend handles a Kloset in a single file, e.g. a ptar file. executable Path to the plugin executable. extra_file An optional array of YAML strings. 
These are extra files that need to be included in the package. EXAMPLES A sample manifest for the \u0026#x201C;fs\u0026#x201D; plugin is as follows:\n# manifest.yaml name: fs display_name: file system connector description: file storage but as external plugin homepage: https://github.com/PlakarKorp/integration-fs license: ISC tags: [ fs, filesystem, \u0026quot;local files\u0026quot; ] api_version: 1.0.0 version: 1.0.0 connectors: - type: importer executable: fs-importer protocols: [fs] - type: exporter executable: fs-exporter protocols: [fs] - type: storage executable: fs-store protocols: [fs] SEE ALSO plakar-pkg-create(1)\nJuly 20, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-pkg-manifest.yaml/","section":"Docs","summary":"Manifest for plugin assembly","title":"pkg-manifest.yaml","type":"docs"},{"content":" PLAKAR-PKG-RECIPE.YAML(5) File Formats Manual PLAKAR-PKG-RECIPE.YAML(5) NAME recipe.yaml \u0026#x2014; Recipe to build Plakar plugins from source\nDESCRIPTION The recipe.yaml file format describes how to fetch and build Plakar plugins. It must have a top-level YAML object with the following fields:\nname The name of the plugin. version The plugin version, which doubles as the git tag as well. It must follow semantic versioning and have a \u0026#x2018;v\u0026#x2019; prefix, e.g. \u0026#x2018;v1.2.3\u0026#x2019;. repository URL to the git repository holding the plugin. 
EXAMPLES A sample recipe to build the \u0026#x201C;fs\u0026#x201D; plugin is as follows:\n# recipe.yaml name: fs version: v1.0.0 repository: https://github.com/PlakarKorp/integrations-fs SEE ALSO plakar-pkg-build(1)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-pkg-recipe.yaml/","section":"Docs","summary":"Recipe to build Plakar plugins from source","title":"pkg-recipe.yaml","type":"docs"},{"content":" PLAKAR-PKG-RM(1) General Commands Manual PLAKAR-PKG-RM(1) NAME plakar-pkg-rm \u0026#x2014; Uninstall Plakar plugins\nSYNOPSIS plakar pkg rm plugin ... DESCRIPTION The plakar pkg rm command removes plugins that have been previously installed with plakar-pkg-add(1) command.\nThe list of plugins can be obtained with plakar-pkg-show(1).\nEXAMPLES Removing a plugin:\n$ plakar pkg show epic-v1.2.3 $ plakar pkg rm epic-v1.2.3 SEE ALSO plakar-pkg-add(1), plakar-pkg-build(1), plakar-pkg-create(1), plakar-pkg-show(1)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-pkg-rm/","section":"Docs","summary":"Uninstall Plakar plugins","title":"pkg-rm","type":"docs"},{"content":" PLAKAR-PKG-SHOW(1) General Commands Manual PLAKAR-PKG-SHOW(1) NAME plakar-pkg-show \u0026#x2014; Show installed Plakar plugins\nSYNOPSIS plakar pkg show [-available] [-long] DESCRIPTION The plakar pkg show command shows the currently installed plugins.\nThe options are as follows:\n-available Instead of installed packages, show the set of prebuilt packages available for this system. -long Show the full package name. FILES ~/.cache/plakar/plugins/ Plugin cache directory. Respects XDG_CACHE_HOME if set. ~/.local/share/plakar/plugins Plugin directory. Respects XDG_DATA_HOME if set. 
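The two pkg show flags described above can be combined; a minimal sketch:

```shell
# List prebuilt packages available for this system, with full package names
$ plakar pkg show -available -long
```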
SEE ALSO plakar-pkg-add(1), plakar-pkg-build(1), plakar-pkg-create(1), plakar-pkg-rm(1)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-pkg-show/","section":"Docs","summary":"Show installed Plakar plugins","title":"pkg-show","type":"docs"},{"content":" PLAKAR(1) General Commands Manual PLAKAR(1) NAME plakar \u0026#x2014; effortless backups\nSYNOPSIS plakar [-config path] [-cpu number] [-keyfile path] [-no-agent] [-quiet] [-trace subsystems] [at kloset] subcommand ... DESCRIPTION plakar is a tool to create distributed, versioned backups with compression, encryption, and data deduplication.\nBy default, plakar operates on the Kloset store at ~/.plakar. This can be changed by using the at option.\nThe following options are available:\n-config path Use the configuration at path. -cpu number Limit the number of parallel workers plakar uses to number. By default it's the number of online CPUs. -keyfile path Read the passphrase from the key file at path instead of prompting. Overrides the PLAKAR_PASSPHRASE environment variable. -no-agent Run without attempting to connect to the agent. -quiet Disable all output except for errors. -trace subsystems Display trace logs. subsystems is a comma-separated series of keywords to enable the trace logs for different subsystems: all, trace, repository, snapshot and server. at kloset Operates on the given kloset store. It could be a path, a URI, or a label in the form \u0026#x201C;@name\u0026#x201D; to reference a configuration created with plakar-store(1). The following commands are available:\nagent Run the plakar agent and configure scheduled tasks, documented in plakar-agent(1). archive Create an archive from a Kloset snapshot, documented in plakar-archive(1). backup Create a new Kloset snapshot, documented in plakar-backup(1). cat Display file contents from a Kloset snapshot, documented in plakar-cat(1). 
check Check data integrity in a Kloset store, documented in plakar-check(1). clone Clone a Kloset store to a new location, documented in plakar-clone(1). create Create a new Kloset store, documented in plakar-create(1). destination Manage configurations for the destination connectors, documented in plakar-destination(1). diff Show differences between files in a Kloset snapshot, documented in plakar-diff(1). digest Compute digests for files in a Kloset snapshot, documented in plakar-digest(1). help Show this manpage and the ones for the subcommands. info Display detailed information about internal structures, documented in plakar-info(1). locate Find filenames in a Kloset snapshot, documented in plakar-locate(1). ls List snapshots and their contents in a Kloset store, documented in plakar-ls(1). maintenance Remove unused data from a Kloset store, documented in plakar-maintenance(1). mount Mount Kloset snapshots as a read-only filesystem, documented in plakar-mount(1). ptar Create a .ptar archive, documented in plakar-ptar(1). pkg show List installed plugins, documented in plakar-pkg-show(1). pkg add Install a plugin, documented in plakar-pkg-add(1). pkg build Build a plugin from source, documented in plakar-pkg-build(1). pkg create Package a plugin, documented in plakar-pkg-create(1). pkg rm Uninstall a plugin, documented in plakar-pkg-rm(1). restore Restore files from a Kloset snapshot, documented in plakar-restore(1). rm Remove snapshots from a Kloset store, documented in plakar-rm(1). server Start a Plakar server, documented in plakar-server(1). source Manage configurations for the source connectors, documented in plakar-source(1). store Manage configurations for storage connectors, documented in plakar-store(1). sync Synchronize snapshots between Kloset stores, documented in plakar-sync(1). ui Serve the Plakar web user interface, documented in plakar-ui(1). version Display the current Plakar version, documented in plakar-version(1). 
ENVIRONMENT PLAKAR_PASSPHRASE Passphrase to unlock the Kloset store; overrides the one from the configuration. If set, plakar won't prompt to unlock. The option keyfile overrides this environment variable. PLAKAR_REPOSITORY Reference to the Kloset store. FILES ~/.cache/plakar and ~/.cache/plakar-agentless Plakar cache directories. ~/.config/plakar/destinations.yml Restore destinations configuration. ~/.config/plakar/sources.yml Backup sources configuration. ~/.config/plakar/stores.yml Kloset stores configuration. ~/.plakar Default Kloset store location. EXAMPLES Create an encrypted Kloset store at the default location:\n$ plakar create Create an encrypted Kloset store on AWS S3:\n$ plakar store add mys3bucket \\ location=s3://s3.eu-west-3.amazonaws.com/backups \\ access_key=\u0026quot;access_key\u0026quot; \\ secret_access_key=\u0026quot;secret_key\u0026quot; $ plakar at @mys3bucket create Create a snapshot of the current directory on the @mys3bucket Kloset store:\n$ plakar at @mys3bucket backup List the snapshots of the default Kloset store:\n$ plakar ls Restore the file \u0026#x201C;notes.md\u0026#x201D; in the current directory from the snapshot with id \u0026#x201C;abcd\u0026#x201D;:\n$ plakar restore -to . abcd:notes.md Remove snapshots older than 30 days:\n$ plakar rm -before 30d September 9, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar/","section":"Docs","summary":"effortless backups","title":"plakar","type":"docs"},{"content":" PLAKAR-POLICY(1) General Commands Manual PLAKAR-POLICY(1) NAME plakar-policy \u0026#x2014; Manage Plakar retention policies\nSYNOPSIS plakar policy subcommand ... DESCRIPTION The plakar policy command manages the retention policies for plakar-prune(1).\nThe configuration consists of a set of named entries, each of them describing a retention policy.\nThe subcommands are as follows:\nadd name [option=value ...] Create a new policy entry identified by name. 
Additional parameters can be set by adding option=value parameters. rm name Remove the policy identified by name from the configuration. set name [option=value ...] Set the option to value for the policy identified by name. Multiple option/value pairs can be specified. show [-ini] [-json] [-yaml] [name ...] Display the current policies configuration. -ini, -json and -yaml control the output format, which is YAML by default. unset name [option ...] Remove the option for the policy identified by name. The available options are described in plakar-query(7): each option corresponds to the similarly named flag.\nEXIT STATUS The plakar-policy utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Create a policy \u0026#x2018;weekly\u0026#x2019; that keeps one backup per week and discards backups older than three months:\n$ plakar policy add weekly $ plakar policy set weekly since='3 months' $ plakar policy set weekly per-week=1 Prune snapshots according to the \u0026#x2018;weekly\u0026#x2019; policy:\n$ plakar prune -policy weekly SEE ALSO plakar(1), plakar-prune(1)\nSeptember 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-policy/","section":"Docs","summary":"Manage Plakar retention policies","title":"policy","type":"docs"},{"content":" PLAKAR-PRUNE(1) General Commands Manual PLAKAR-PRUNE(1) NAME plakar-prune \u0026#x2014; Prune snapshots according to a policy\nSYNOPSIS plakar prune [-apply] [-policy name] [snapshotID ...] DESCRIPTION The plakar prune command deletes snapshots from a Plakar repository. Snapshots can be filtered for deletion by age, by tag, or by specifying the snapshot IDs to remove. 
If no snapshotIDs are provided, location flags such as -before or -tag must be specified to filter the snapshots to delete.\nplakar prune supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe arguments are as follows:\n-apply Delete matching snapshots. The default is to only show the snapshots that would be removed without actually executing the operation. -policy name Use the given policy. See plakar-policy(1) for how policies are managed. EXAMPLES Remove a specific snapshot by ID:\n$ plakar prune abc123 Remove snapshots older than 30 days:\n$ plakar prune -before 30d Remove snapshots with a specific tag:\n$ plakar prune -tag daily-backup Remove snapshots older than 1 year with a specific tag:\n$ plakar prune -before '1 year' -tag daily-backup DIAGNOSTICS The plakar-prune utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid date format or failure to delete a snapshot. SEE ALSO plakar(1), plakar-backup(1), plakar-policy(1), plakar-query(7)\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-prune/","section":"Docs","summary":"Prune snapshots according to a policy","title":"prune","type":"docs"},{"content":" PLAKAR-PTAR(1) General Commands Manual PLAKAR-PTAR(1) NAME plakar-ptar \u0026#x2014; generate a self-contained Kloset archive (.ptar)\nSYNOPSIS plakar ptar [-plaintext] [-overwrite] [-k location] -o file.ptar [path ...] DESCRIPTION The plakar ptar command creates a single portable archive (a \u0026#x2018;.ptar\u0026#x2019; file) that bundles one or more existing Plakar repositories (\u0026#x201C;klosets\u0026#x201D;) and/or arbitrary filesystem paths into a self-contained package. 
The resulting archive preserves repository metadata, snapshots and data chunks, and is compressed and encrypted for secure transport or off-site storage.\nAt least one data source must be supplied: either one or more -k or -kloset options naming remote or local kloset repositories, and/or one or more path arguments identifying files or directories to back up. The destination archive name is mandatory and supplied with -o.\nUnless the -overwrite flag is given, plakar ptar refuses to replace an existing archive.\nThe options are as follows:\n-plaintext Disable transparent encryption of the archive. If omitted, plakar ptar encrypts repository data using a key derived from the passphrase specified via PLAKAR_PASSPHRASE or prompted interactively. -overwrite Overwrite an existing .ptar file at the destination path. -k location, -kloset location Add a kloset repository to include in the archive. May be specified multiple times to bundle several repositories. -o file.ptar Path of the archive to create. This option is required. path ... Zero or more filesystem paths to back up directly into the archive. ENVIRONMENT PLAKAR_PASSPHRASE Passphrase used to derive the encryption key when encryption is enabled. DIAGNOSTICS The plakar-ptar utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred (invalid arguments, existing archive without -overwrite, hashing algorithm unknown, repository access failure, I/O errors, etc.). 
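A sketch of typical invocations using the options above (the @mys3bucket label and paths are illustrative):

```shell
# Bundle the @mys3bucket kloset and a local directory into one encrypted archive
$ plakar ptar -k @mys3bucket -o backups.ptar ~/notes

# Recreate the archive, replacing a previous one, without encryption
$ plakar ptar -plaintext -overwrite -k @mys3bucket -o backups.ptar
```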
SEE ALSO plakar(1), plakar-backup(1), plakar-create(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-ptar/","section":"Docs","summary":"generate a self-contained Kloset archive (.ptar)","title":"ptar","type":"docs"},{"content":" PLAKAR-QUERY(7) Miscellaneous Information Manual PLAKAR-QUERY(7) NAME plakar-query \u0026#x2014; query flags shared among many Plakar subcommands\nDESCRIPTION What follows is a set of command line arguments that many plakar(1) subcommands provide to filter snapshots.\nThere are two kinds of flags:\nmatchers These select snapshots. If combined, the result is the union of the various matchers. filters These instead filter the output of the matchers by yielding only snapshots matching certain criteria. If combined, the result is the intersection of the various filters. If no matcher is given, all the snapshots are implicitly selected, and then filtered according to the given filters, if any.\nThe matchers are divided into:\nmatchers that select snapshots from the last n units of time: -minutes n \u0026#x00A0; -hours n \u0026#x00A0; -days n \u0026#x00A0; -weeks n \u0026#x00A0; -months n \u0026#x00A0; -years n \u0026#x00A0; or matchers that select snapshots taken during the last n occurrences of a given day of the week:\n-mondays n \u0026#x00A0; -thuesdays n \u0026#x00A0; -wednesdays n \u0026#x00A0; -thursdays n \u0026#x00A0; -fridays n \u0026#x00A0; -saturdays n \u0026#x00A0; -sundays n \u0026#x00A0; matchers that select at most n snapshots per time period: -per-minute n \u0026#x00A0; -per-hour n \u0026#x00A0; -per-day n \u0026#x00A0; -per-week n \u0026#x00A0; -per-month n \u0026#x00A0; -per-year n \u0026#x00A0; -per-monday n \u0026#x00A0; -per-thuesday n \u0026#x00A0; -per-wednesday n \u0026#x00A0; -per-thursday n \u0026#x00A0; -per-friday n \u0026#x00A0; -per-saturday n \u0026#x00A0; -per-sunday n \u0026#x00A0; The filters are:\n-before date Select snapshots older than the given date.
The date may be in RFC3339 format, as \u0026#x201C;YYYY-mm-DD HH:MM\u0026#x201D;, \u0026#x201C;YYYY-mm-DD HH:MM:SS\u0026#x201D;, \u0026#x201C;YYYY-mm-DD\u0026#x201D;, or \u0026#x201C;YYYY/mm/DD\u0026#x201D; where YYYY is a year, mm a month, DD a day, HH an hour in 24-hour format, MM minutes, and SS seconds. Alternatively, human-style intervals like \u0026#x201C;half an hour\u0026#x201D;, \u0026#x201C;a month\u0026#x201D; or \u0026#x201C;2h30m\u0026#x201D; are also accepted.\n-category name Select snapshots whose category is name. -environment name Select snapshots whose environment is name. -job name Select snapshots whose job is name. -latest Select only the latest snapshot. -name name Select snapshots whose name is name. -perimeter name Select snapshots whose perimeter is name. -root path Select snapshots whose root directory is path. May be specified multiple times; snapshots are selected if any of the given paths matches. -since date Select snapshots newer than the given date. The accepted format is the same as -before. -tag name Select snapshots tagged with name. May be specified multiple times, and multiple tags may be given at the same time if comma-separated. September 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-query/","section":"Docs","summary":"query flags shared among many Plakar subcommands","title":"query","type":"docs"},{"content":" PLAKAR-RESTORE(1) General Commands Manual PLAKAR-RESTORE(1) NAME plakar-restore \u0026#x2014; Restore files from a Plakar snapshot\nSYNOPSIS plakar restore [-name name] [-category category] [-environment environment] [-perimeter perimeter] [-job job] [-tag tag] [-latest] [-before date] [-since date] [-concurrency number] [-quiet] [-to directory] [-skip-permissions] [snapshotID:path ...] DESCRIPTION The plakar restore command is used to restore files and directories at path from a specified Plakar snapshot to the local file system.
If path is omitted, then all the files in the specified snapshotID are restored. If no snapshotID is provided, the command attempts to restore the current working directory from the last matching snapshot.\nThe options are as follows:\n-name string Only apply command to snapshots that match name. -category string Only apply command to snapshots that match category. -environment string Only apply command to snapshots that match environment. -perimeter string Only apply command to snapshots that match perimeter. -job string Only apply command to snapshots that match job. -tag string Only apply command to snapshots that match tag. -concurrency number Set the maximum number of parallel tasks for faster processing. Defaults to 8 * CPU count + 1. -skip-permissions Skip restoring file permissions and ownership during restore, defaulting to 0750 for directories and 0640 for files. -to directory Specify the base directory to which the files will be restored. If omitted, files are restored to the current working directory. -quiet Suppress output to standard output, only logging errors and warnings. EXAMPLES Restore all files from a specific snapshot to the current directory:\n$ plakar restore abc123 Restore to a specific directory:\n$ plakar restore -to /mnt/ abc123 Restore latest snapshot to a specific directory:\n$ plakar restore -latest -to /mnt/ abc123 Restore specific path to a specific directory:\n$ plakar restore -to /mnt/ abc123:/etc/apache2 Restore to a specific destination:\n$ plakar restore -to @s3target abc123 Restore specific path to a specific destination:\n$ plakar restore -to @s3target abc123:/etc/apache2 DIAGNOSTICS The plakar-restore utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as a failure to locate the snapshot or a destination directory issue.
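The snapshotID:path form used in the examples above (and by cat, check, and archive as well) splits on the first colon: everything before it names the snapshot, everything after it is a path inside that snapshot. A minimal sketch of that parsing rule; parse_target is a hypothetical helper for illustration, not part of plakar:

```python
# Hypothetical parser for the "snapshotID:path" argument syntax.
def parse_target(arg):
    """Split a target into (snapshot_id, path).

    A missing path means the whole snapshot is selected.
    """
    if ":" in arg:
        snapshot_id, path = arg.split(":", 1)
        return snapshot_id, path
    return arg, None

print(parse_target("abc123:/etc/apache2"))  # ('abc123', '/etc/apache2')
print(parse_target("abc123"))               # ('abc123', None)
```

Note that only the first colon separates the two parts, so paths containing colons remain intact.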
SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-restore/","section":"Docs","summary":"Restore files from a Plakar snapshot","title":"restore","type":"docs"},{"content":" PLAKAR-RM(1) General Commands Manual PLAKAR-RM(1) NAME plakar-rm \u0026#x2014; Remove snapshots from a Plakar repository\nSYNOPSIS plakar rm [-name name] [-category category] [-environment environment] [-perimeter perimeter] [-job job] [-tag tag] [-latest] [-before date] [-since date] [snapshotID ...] DESCRIPTION The plakar rm command deletes snapshots from a Plakar repository. Snapshots can be filtered for deletion by age, by tag, or by specifying the snapshot IDs to remove. If no snapshotID is provided, either -before or -tag must be specified to filter the snapshots to delete.\nThe arguments are as follows:\n-name name Filter snapshots that match name. -category category Filter snapshots that match category. -environment environment Filter snapshots that match environment. -perimeter perimeter Filter snapshots that match perimeter. -job job Filter snapshots that match job. -tag tag Filter snapshots that match tag. -latest Filter latest snapshot matching filters. -before date Filter snapshots matching filters and older than the specified date. Accepted formats include relative durations (e.g. 2d for two days, 1w for one week) or specific dates in various formats (e.g. 2006-01-02 15:04:05). -since date Filter snapshots matching filters and created since the specified date, inclusive. Accepted formats include relative durations (e.g. 2d for two days, 1w for one week) or specific dates in various formats (e.g. 2006-01-02 15:04:05).
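The relative durations accepted by -before and -since (such as 2d or 1w) map naturally onto time deltas. A minimal sketch of how such specs could be parsed, assuming only the minute/hour/day/week units shown in the examples (years, as in 1y, are omitted here since a calendar year is not a fixed-length delta); parse_duration is a hypothetical helper, not plakar's actual parser:

```python
import re
from datetime import timedelta

# Hypothetical mapping of duration suffixes to timedelta keyword arguments.
UNITS = {"m": "minutes", "h": "hours", "d": "days", "w": "weeks"}

def parse_duration(spec):
    """Parse a relative duration such as "2d" or "1w" into a timedelta."""
    match = re.fullmatch(r"(\d+)([mhdw])", spec)
    if not match:
        raise ValueError(f"unrecognized duration: {spec!r}")
    value, unit = int(match.group(1)), match.group(2)
    return timedelta(**{UNITS[unit]: value})

print(parse_duration("2d"))  # 2 days, 0:00:00
print(parse_duration("1w"))  # 7 days, 0:00:00
```

A cutoff for -before would then be computed by subtracting the delta from the current time.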
EXAMPLES Remove a specific snapshot by ID:\n$ plakar rm abc123 Remove snapshots older than 30 days:\n$ plakar rm -before 30d Remove snapshots with a specific tag:\n$ plakar rm -tag daily-backup Remove snapshots older than 1 year with a specific tag:\n$ plakar rm -before 1y -tag daily-backup DIAGNOSTICS The plakar-rm utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid date format or failure to delete a snapshot. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-rm/","section":"Docs","summary":"Remove snapshots from a Plakar repository","title":"rm","type":"docs"},{"content":" PLAKAR-SCHEDULER(1) General Commands Manual PLAKAR-SCHEDULER(1) NAME plakar-scheduler \u0026#x2014; Run the Plakar scheduler\nSYNOPSIS plakar scheduler [-foreground] [start -tasks configfile] [stop] DESCRIPTION The plakar scheduler runs in the background and manages task execution based on the defined schedule.\nThe options are as follows:\n-foreground Run the scheduler in the foreground instead of as a background service. -tasks configfile Specify the configuration file that contains the task definitions and schedules. start -tasks configfile Starts the scheduler service and its tasks from configfile. stop Stop the currently running scheduler service. DIAGNOSTICS The plakar-scheduler utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid parameters, inability to create the repository, or configuration issues. 
SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-scheduler/","section":"Docs","summary":"Run the Plakar scheduler","title":"scheduler","type":"docs"},{"content":" PLAKAR-SERVER(1) General Commands Manual PLAKAR-SERVER(1) NAME plakar-server \u0026#x2014; Start a Plakar server\nSYNOPSIS plakar server [-allow-delete] [-listen [host]:port] DESCRIPTION The plakar server command starts a Plakar server instance at the provided address, allowing remote interaction with a Kloset store over a network.\nThe options are as follows:\n-allow-delete Enable delete operations. By default, delete operations are disabled to prevent accidental data loss. -listen [host]:port The host and port to listen on, separated by a colon. The host name is optional, and defaults to all available addresses. If -listen is not provided, the server defaults to listening on localhost at port 9876. EXAMPLES Start a plakar server on the local store:\n$ plakar server Start a plakar server on a remote store:\n$ plakar at sftp://example.org server Start a server on a specific address and port:\n$ plakar server -listen 127.0.0.1:12345 DIAGNOSTICS The plakar-server utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nCAVEATS When a host name is provided, plakar server uses only one of the IP addresses it resolves to, preferably IPv4.\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-server/","section":"Docs","summary":"Start a Plakar server","title":"server","type":"docs"},{"content":" PLAKAR-SERVICE(1) General Commands Manual PLAKAR-SERVICE(1) NAME plakar-service \u0026#x2014; Manage optional Plakar-connected services\nSYNOPSIS plakar service list plakar service add name [key=value ...]
plakar service rm name plakar service status name plakar service show name plakar service enable name plakar service disable name plakar service set name [key=value ...] plakar service unset name [key ...] DESCRIPTION The plakar service command allows you to enable, disable, and inspect additional services that integrate with the plakar platform via plakar-login(1) authentication. These services connect to the plakar.io infrastructure, and should only be enabled if you agree to transmit non-sensitive operational data to plakar.io.\nAll subcommands require prior authentication via plakar-login(1).\nServices are managed by the backend and discovered at runtime. For example, when the \u0026#x201C;alerting\u0026#x201D; service is enabled, it will:\nSend email notifications when operations fail. Expose the latest alerting reports in the Plakar UI (see plakar-ui(1)). By default, all services are disabled.\nSUBCOMMANDS list Display the list of available services. add name [key=value ...] Set the configuration for the service identified by name and enable it. The configuration is defined by the given set of key/value pairs. The existing configuration, if any, is discarded. rm name Disable the service identified by name and discard its configuration. status name Display the current status (enabled or disabled) of the named service. show name Display the configuration for the specified service. enable name Enable the specified service. disable name Disable the specified service. set name [key=value ...] Set the configuration key to value for the service identified by name. Multiple key/value pairs can be specified. unset name [key ...] Unset the configuration key for the service identified by name. Multiple keys can be specified.
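The set/unset semantics described above amount to updating a flat key/value map, with values supplied as key=value pairs split on the first equals sign. A minimal model of that behavior; the function names and the example keys (email, threshold) are illustrative assumptions, not plakar's real configuration schema:

```python
# Illustrative model of "service set" / "service unset" semantics:
# a service configuration is a flat key/value map.
def service_set(config, *pairs):
    """Apply key=value pairs; splits only on the first '='."""
    for pair in pairs:
        key, value = pair.split("=", 1)
        config[key] = value
    return config

def service_unset(config, *keys):
    """Remove keys; unsetting a missing key is a no-op."""
    for key in keys:
        config.pop(key, None)
    return config

cfg = service_set({}, "email=ops@example.com", "threshold=5")
print(cfg)  # {'email': 'ops@example.com', 'threshold': '5'}
service_unset(cfg, "threshold")
print(cfg)  # {'email': 'ops@example.com'}
```

Splitting on only the first equals sign lets values themselves contain = characters.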
EXAMPLES Check the status of the alerting service:\n$ plakar service status alerting Enable alerting:\n$ plakar service enable alerting Disable alerting:\n$ plakar service disable alerting SEE ALSO plakar-login(1), plakar-ui(1)\nAugust 7, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-service/","section":"Docs","summary":"Manage optional Plakar-connected services","title":"service","type":"docs"},{"content":" PLAKAR-SOURCE(1) General Commands Manual PLAKAR-SOURCE(1) NAME plakar-source \u0026#x2014; Manage Plakar backup source configuration\nSYNOPSIS plakar source subcommand ... DESCRIPTION The plakar source command manages the configuration of data sources for Plakar to back up.\nThe configuration consists of a set of named entries, each of them describing a source for a backup operation.\nA source is defined by at least a location, specifying the importer to use, and some importer-specific parameters.\nThe subcommands are as follows:\nadd name location [option=value ...] Create a new source entry identified by name with the specified location describing the importer to use. Additional importer options can be set by adding option=value parameters. check name Check whether the importer for the source identified by name is properly configured. import [-config location] [-overwrite] [-rclone] [sections ...] Import source configurations from various sources including files, piped input, or rclone configurations.
By default, reads from stdin, allowing for piped input from other commands.\nThe -config option specifies a file or URL to read the configuration from.\nThe -overwrite option allows overwriting existing source configurations with the same names.\nThe -rclone option treats the input as an rclone configuration, useful for importing rclone remotes as Plakar sources.\nSpecific sections can be imported by listing their names.\nSections can be renamed during import by appending :newname.\nFor detailed examples and usage patterns, see the Importing Configurations guide at https://plakar.io/docs/v1.0.5/guides/importing-configurations/.\nping name Try to open the data source identified by name to make sure it is reachable. rm name Remove the source identified by name from the configuration. set name [option=value ...] Set the option to value for the source identified by name. Multiple option/value pairs can be specified. show [-secrets] [name ...] Display the current sources configuration. If -secrets is specified, sensitive information such as passwords or tokens will be shown. unset name [option ...] Remove the option for the source entry identified by name. EXIT STATUS The plakar-source utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nSeptember 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-source/","section":"Docs","summary":"Manage Plakar backup source configuration","title":"source","type":"docs"},{"content":" PLAKAR-STORE(1) General Commands Manual PLAKAR-STORE(1) NAME plakar-store \u0026#x2014; Manage Plakar store configurations\nSYNOPSIS plakar store subcommand ...
DESCRIPTION The plakar store command manages the Plakar store configurations.\nThe configuration consists of a set of named entries, each of them describing a Plakar store holding backups.\nA store is defined by at least a location, specifying the storage implementation to use, and some storage-specific parameters.\nThe subcommands are as follows:\nadd name location [option=value ...] Create a new store entry identified by name with the specified location. Specific additional configuration parameters can be set by adding option=value parameters. check name Check whether the store identified by name is properly configured. import [-config location] [-overwrite] [-rclone] [sections ...] Import store configurations from various sources including files, piped input, or rclone configurations. By default, reads from stdin, allowing for piped input from other commands.\nThe -config option specifies a file or URL to read the configuration from.\nThe -overwrite option allows overwriting existing store configurations with the same names.\nThe -rclone option treats the input as an rclone configuration, useful for importing rclone remotes as Plakar stores.\nSpecific sections can be imported by listing their names.\nSections can be renamed during import by appending :newname.\nFor detailed examples and usage patterns, see the Importing Configurations guide at https://plakar.io/docs/v1.0.5/guides/importing-configurations/.\nping name Try to connect to the store identified by name to make sure it is reachable. rm name Remove the store identified by name from the configuration. set name [option=value ...] Set the option to value for the store identified by name. Multiple option/value pairs can be specified. show [-secrets] [name ...] Display the current stores configuration. If -secrets is specified, sensitive information such as passwords or tokens will be shown. unset name [option ...] Remove the option for the store entry identified by name.
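The import subcommand's section selectors come in two forms: a bare section name, or section:newname to rename the section as it is imported. A minimal sketch of how such selectors could be parsed; parse_section_arg is a hypothetical helper, not plakar's actual implementation:

```python
# Hypothetical parser for import section selectors, e.g.
# "minio" (import as-is) or "minio:offsite" (import and rename).
def parse_section_arg(arg):
    """Return (section_to_import, name_to_store_it_under)."""
    if ":" in arg:
        old, new = arg.split(":", 1)
        return old, new
    return arg, arg

print(parse_section_arg("minio"))          # ('minio', 'minio')
print(parse_section_arg("minio:offsite"))  # ('minio', 'offsite')
```

The section names used here (minio, offsite) are made-up examples of rclone-style remote names.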
DIAGNOSTICS The plakar-store utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nSeptember 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-store/","section":"Docs","summary":"Manage Plakar store configurations","title":"store","type":"docs"},{"content":" PLAKAR-SYNC(1) General Commands Manual PLAKAR-SYNC(1) NAME plakar-sync \u0026#x2014; Synchronize snapshots between Plakar repositories\nSYNOPSIS plakar sync [-packfiles path] [snapshotID] to | from | with repository DESCRIPTION The plakar sync command synchronizes snapshots between two Plakar repositories. If a specific snapshot ID is provided, only snapshots with matching IDs will be synchronized.\nplakar sync supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-packfiles path Path in which to put the temporary packfiles instead of building them in memory. If the special value \u0026#x2018;memory\u0026#x2019; is specified then the packfiles are built in memory (the default). The arguments are as follows:\nto | from | with Specifies the direction of synchronization: to Synchronize snapshots from the local repository to the specified peer repository. from Synchronize snapshots from the specified peer repository to the local repository. with Synchronize snapshots in both directions, ensuring both repositories are fully synchronized. repository Path to the peer repository to synchronize with.
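The three directions above can be modeled as set operations on the snapshot IDs held by each side: to pushes what the peer is missing, from pulls what the local side is missing, and with does both. An illustrative sketch of that planning step only (the real command of course transfers snapshot data, not just IDs; plan_sync is a hypothetical name):

```python
# Illustrative model of sync directions as set differences on snapshot IDs.
def plan_sync(local, peer, direction):
    if direction == "to":      # push snapshots the peer is missing
        return {"push": local - peer, "pull": set()}
    if direction == "from":    # pull snapshots the local side is missing
        return {"push": set(), "pull": peer - local}
    if direction == "with":    # bi-directional: both of the above
        return {"push": local - peer, "pull": peer - local}
    raise ValueError(f"unknown direction: {direction!r}")

local, peer = {"a", "b"}, {"b", "c"}
print(plan_sync(local, peer, "to"))    # {'push': {'a'}, 'pull': set()}
print(plan_sync(local, peer, "with"))  # {'push': {'a'}, 'pull': {'c'}}
```

After a with synchronization both sides hold the union of the two ID sets, which is what "fully synchronized" means above.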
EXAMPLES Synchronize the snapshot \u0026#x2018;abcd\u0026#x2019; with a peer repository:\n$ plakar sync abcd to @peer Bi-directional synchronization with peer repository of recent snapshots:\n$ plakar sync -since 7d with @peer Synchronize all snapshots of @peer to @repo:\n$ plakar at @repo sync from @peer DIAGNOSTICS The plakar-sync utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 General failure occurred, such as an invalid repository path, snapshot ID mismatch, or network error. SEE ALSO plakar(1), plakar-query(7)\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-sync/","section":"Docs","summary":"Synchronize snapshots between Plakar repositories","title":"sync","type":"docs"},{"content":" PLAKAR-TOKEN(1) General Commands Manual PLAKAR-TOKEN(1) NAME plakar-token \u0026#x2014; Manage Plakar tokens\nSYNOPSIS plakar token [create] DESCRIPTION The plakar token command manages tokens used to authenticate to Plakar services. Tokens are not currently usable and exist only for future features.\nSUBCOMMANDS create Create a new token. SEE ALSO plakar(1)\nAugust 6, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-token/","section":"Docs","summary":"Manage Plakar tokens","title":"token","type":"docs"},{"content":" PLAKAR-UI(1) General Commands Manual PLAKAR-UI(1) NAME plakar-ui \u0026#x2014; Serve the Plakar web user interface\nSYNOPSIS plakar ui [-addr address] [-cors] [-no-auth] [-no-spawn] DESCRIPTION The plakar ui command serves the Plakar web user interface. By default, it opens the default web browser.\nThe options are as follows:\n-addr address Specify the address and port for the UI to listen on separated by a colon, (e.g. localhost:8080). If omitted, plakar ui listens on localhost on a random port. 
-cors Set the \u0026#x2018;Access-Control-Allow-Origin\u0026#x2019; HTTP header to allow the UI to be accessed from any origin. -no-auth Disable the authentication token that is otherwise needed to consume the exposed HTTP APIs. -no-spawn Do not automatically open the web browser. EXAMPLES Use a custom address and disable automatic browser spawning:\n$ plakar ui -addr localhost:9090 -no-spawn DIAGNOSTICS The plakar-ui utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 A general error occurred, such as an inability to launch the UI or bind to the specified address. SEE ALSO plakar(1)\nAugust 6, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-ui/","section":"Docs","summary":"Serve the Plakar web user interface","title":"ui","type":"docs"},{"content":" PLAKAR-VERSION(1) General Commands Manual PLAKAR-VERSION(1) NAME plakar-version \u0026#x2014; Display the current Plakar version\nSYNOPSIS plakar version DESCRIPTION The plakar version command displays the current version of Plakar.\nSEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.0.5/references/commands/plakar-version/","section":"Docs","summary":"Display the current Plakar version","title":"version","type":"docs"},{"content":" PLAKAR-ARCHIVE(1) General Commands Manual PLAKAR-ARCHIVE(1) NAME plakar-archive \u0026#x2014; Create an archive from a Plakar snapshot\nSYNOPSIS plakar archive [-format type] [-output archive] [-rebase] snapshotID:path DESCRIPTION The plakar archive command creates an archive of the given type from the contents at path of a specified Plakar snapshot, or all the files if no path is given.\nThe options are as follows:\n-format type Specify the archive format. Supported formats are: tar Creates a tar file. tarball Creates a compressed tar.gz file. zip Creates a zip archive.
-output pathname Specify the output path for the archive file. If omitted, the archive is created with a default name based on the current date and time. -rebase Strip the leading path from archived files, useful for creating \u0026quot;flat\u0026quot; archives without nested directories. EXAMPLES Create a tarball of the entire snapshot:\n$ plakar archive -output backup.tar.gz -format tarball abc123 Create a zip archive of a specific directory within a snapshot:\n$ plakar archive -output dir.zip -format zip abc123:/var/www Archive with rebasing to remove directory structure:\n$ plakar archive -rebase -format tar abc123 DIAGNOSTICS The plakar-archive utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as unsupported format, missing files, or permission issues. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-archive/","section":"Docs","summary":"Create an archive from a Plakar snapshot","title":"archive","type":"docs"},{"content":" PLAKAR-BACKUP(1) General Commands Manual PLAKAR-BACKUP(1) NAME plakar-backup \u0026#x2014; Create a new snapshot in a Kloset store\nSYNOPSIS plakar backup [-force-timestamp timestamp] [-ignore pattern] [-ignore-file file] [-check] [-dry-run] [-no-xattr] [-o option=value] [-packfiles path] [-quiet] [-silent] [-tag tag] [place] DESCRIPTION The plakar backup command creates a new snapshot of place, or the current directory. Snapshots can be filtered to ignore specific files or directories based on patterns provided through options.\nplace can be either a path, an URI, or a label with the form \u0026#x201C;@name\u0026#x201D; to reference a source connector configured with plakar-source(1).\nThe options are as follows:\n-force-timestamp timestamp Specify a fixed timestamp (in ISO 8601 or relative human format) to use for the snapshot. 
This can be used to reimport an existing backup with the same timestamp. -ignore pattern Specify individual gitignore exclusion patterns to ignore files or directories in the backup. This option can be repeated. -ignore-file file Specify a file containing gitignore exclusion patterns, one per line, to ignore files or directories in the backup. -check Perform a full check on the backup after success. -dry-run Do not write a snapshot; instead, perform a dry run by outputting the list of files and directories that would be included in the backup. Respects all exclude patterns and other options, but makes no changes to the Kloset store. -no-xattr Skip extended attributes (xattrs) when creating the backup. -o option=value Can be used to pass extra arguments to the source connector. The given option takes precedence over the configuration file. -quiet Suppress output to standard output, only logging errors and warnings. -packfiles path Path in which to put the temporary packfiles instead of building them in the default temporary directory. If the special value \u0026#x2018;memory\u0026#x2019; is specified then the packfiles are built in memory. -silent Suppress all output. -tag tag Comma-separated list of tags to apply to the snapshot. ENVIRONMENT PLAKAR_TAGS Comma-separated list of tags to apply to the snapshot during backup. Overridden by the -tag command-line flag.
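The precedence stated above, where the -tag flag overrides PLAKAR_TAGS, can be sketched as a small resolution function: the flag wins when present, the environment variable is the fallback, and both are comma-separated lists. resolve_tags is a hypothetical name, not plakar's implementation:

```python
import os

# Sketch of the documented tag precedence: -tag overrides PLAKAR_TAGS.
def resolve_tags(flag_value, environ=os.environ):
    """Return the effective tag list for a backup."""
    if flag_value is not None:
        source = flag_value
    else:
        source = environ.get("PLAKAR_TAGS", "")
    # Split on commas, dropping empty entries from stray separators.
    return [tag for tag in source.split(",") if tag]

print(resolve_tags("daily-backup,production"))        # ['daily-backup', 'production']
print(resolve_tags(None, {"PLAKAR_TAGS": "weekly"}))  # ['weekly']
print(resolve_tags(None, {}))                         # []
```

Passing the environment as a parameter keeps the function testable without mutating os.environ.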
EXAMPLES Create a snapshot of the current directory with two tags:\n$ plakar backup -tag daily-backup,production Ignore files using patterns in a given file:\n$ plakar backup -ignore-file ~/my-ignore-file /var/www or by using patterns specified inline:\n$ plakar backup -ignore \u0026quot;*.tmp\u0026quot; -ignore \u0026quot;*.log\u0026quot; /var/www Pass an option to the importer, in this case to avoid traversing mount points:\n$ plakar backup -o dont_traverse_fs=true / DIAGNOSTICS The plakar-backup utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully; a snapshot was created, but some items may have been skipped (for example, files without sufficient permissions). Run plakar-info(1) on the new snapshot to view any errors. \u0026gt;0 An error occurred, such as failure to access the Kloset store or issues with exclusion patterns. SEE ALSO plakar(1), plakar-source(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-backup/","section":"Docs","summary":"Create a new snapshot in a Kloset store","title":"backup","type":"docs"},{"content":" PLAKAR-CAT(1) General Commands Manual PLAKAR-CAT(1) NAME plakar-cat \u0026#x2014; Display file contents from a Plakar snapshot\nSYNOPSIS plakar cat [-decompress] [-highlight] snapshotID:path ... DESCRIPTION The plakar cat command outputs the contents of path within Plakar snapshots to the standard output. It can decompress compressed files and optionally apply syntax highlighting based on the file type.\nThe options are as follows:\n-decompress If set, Plakar attempts to decompress application/gzip files. -highlight Apply syntax highlighting to the output based on the file type.
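As an illustration of the kind of transparent decompression -decompress performs on application/gzip content, the sketch below detects gzip data by its two-byte magic number and decompresses it, passing other content through untouched. The detection heuristic and helper name are assumptions for illustration, not how plakar itself identifies content types:

```python
import gzip

# gzip streams start with the magic bytes 0x1f 0x8b.
GZIP_MAGIC = b"\x1f\x8b"

def maybe_decompress(data):
    """Decompress gzip content; pass anything else through unchanged."""
    if data[:2] == GZIP_MAGIC:
        return gzip.decompress(data)
    return data

compressed = gzip.compress(b"hello from a snapshot\n")
print(maybe_decompress(compressed))     # b'hello from a snapshot\n'
print(maybe_decompress(b"plain text"))  # b'plain text'
```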
EXAMPLES Display a file's contents from a snapshot:\n$ plakar cat abc123:/etc/passwd Display a file with syntax highlighting:\n$ plakar cat -highlight abc123:/home/op/korpus/driver.sh DIAGNOSTICS The plakar-cat utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as failure to retrieve a file or decompress content. SEE ALSO plakar(1), plakar-backup(1)\nAugust 6, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-cat/","section":"Docs","summary":"Display file contents from a Plakar snapshot","title":"cat","type":"docs"},{"content":" PLAKAR-CHECK(1) General Commands Manual PLAKAR-CHECK(1) NAME plakar-check \u0026#x2014; Check data integrity in a Plakar repository\nSYNOPSIS plakar check [-fast] [-no-verify] [-quiet] [snapshotID:path ...] DESCRIPTION The plakar check command verifies the integrity of data in a Plakar repository. It checks the given paths inside the snapshots for consistency and validates file macs to ensure no corruption has occurred, or all the data in the repository if no snapshotID or location flags are given.\nIn addition to the flags described below, plakar check supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-fast Enable a faster check that skips mac verification. This option performs only structural validation without confirming data integrity. -no-verify Disable signature verification. This option allows checking snapshot integrity regardless of an invalid snapshot signature. -quiet Suppress output to standard output, only logging errors and warnings.
EXAMPLES Perform a full integrity check on all snapshots:\n$ plakar check Perform a fast check on specific paths of two snapshots:\n$ plakar check -fast abc123:/etc/passwd def456:/var/www DIAGNOSTICS The plakar-check utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully with no integrity issues found. \u0026gt;0 An error occurred, such as corruption detected in a snapshot or failure to check data integrity. SEE ALSO plakar(1), plakar-query(7)\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-check/","section":"Docs","summary":"Check data integrity in a Plakar repository","title":"check","type":"docs"},{"content":" PLAKAR-CREATE(1) General Commands Manual PLAKAR-CREATE(1) NAME plakar-create \u0026#x2014; Create a new Plakar repository\nSYNOPSIS plakar create [-plaintext] DESCRIPTION The plakar create command creates a new Plakar repository at the specified path, which defaults to ~/.plakar.\nThe options are as follows:\n-plaintext Disable transparent encryption for the repository. If specified, the repository will not use encryption. ENVIRONMENT PLAKAR_PASSPHRASE Repository encryption password. DIAGNOSTICS The plakar-create utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid parameters, inability to create the repository, or configuration issues. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-create/","section":"Docs","summary":"Create a new Plakar repository","title":"create","type":"docs"},{"content":" PLAKAR-DESTINATION(1) General Commands Manual PLAKAR-DESTINATION(1) NAME plakar-destination \u0026#x2014; Manage Plakar restore destination configuration\nSYNOPSIS plakar destination subcommand ...
DESCRIPTION The plakar destination command manages the configuration of destinations where Plakar will restore.\nThe configuration consists of a set of named entries, each of them describing a destination where a restore operation may happen.\nA destination is defined by at least a location, specifying the exporter to use, and some exporter-specific parameters.\nThe subcommands are as follows:\nadd name location [option=value ...] Create a new destination entry identified by name with the specified location describing the exporter to use. Additional exporter options can be set by adding option=value parameters. check name Check whether the exporter for the destination identified by name is properly configured. import [-config location] [-overwrite] [-rclone] [sections ...] Import destination configurations from various sources including files, piped input, or rclone configurations. By default, reads from stdin, allowing for piped input from other commands like plakar source show.\nThe -config option specifies a file or URL to read the configuration from.\nThe -overwrite option allows overwriting existing destination configurations with the same names.\nThe -rclone option treats the input as an rclone configuration, useful for importing rclone remotes as Plakar destinations.\nSpecific sections can be imported by listing their names.\nSections can be renamed during import by appending :newname.\nFor detailed examples and usage patterns, see the https://plakar.io/docs/v1.1.0/guides/importing-configurations/ Importing Configurations guide.\nping name Try to open the destination identified by name to make sure it is reachable. rm name Remove the destination identified by name from the configuration. set name [option=value ...] Set the option to value for the destination identified by name. Multiple option/value pairs can be specified. show [-secrets] [name ...] Display the current destinations configuration. 
If -secrets is specified, sensitive information such as passwords or tokens will be shown. unset name [option ...] Remove the option for the destination entry identified by name. EXIT STATUS The plakar-destination utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nSeptember 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-destination/","section":"Docs","summary":"Manage Plakar restore destination configuration","title":"destination","type":"docs"},{"content":" PLAKAR-DIAG(1) General Commands Manual PLAKAR-DIAG(1) NAME plakar-diag \u0026#x2014; Display detailed information about Plakar internal structures\nSYNOPSIS plakar diag [contenttype | locks | object | packfile | snapshot | state | vfs | xattr] DESCRIPTION The plakar diag command provides detailed information about various internal data structures. The type of information displayed depends on the specified argument. Without any arguments, display information about the repository.\nThe sub-commands are as follows:\ncontenttype snapshotID:path \u0026#x00A0; locks Display the list of locks currently held on the repository. object objectID Display information about a specific object, including its mac, type, tags, and associated data chunks. packfile packfileID Show details of packfiles, including entries and macs, which store object data within the repository. snapshot snapshotID Show detailed information about a specific snapshot, including its metadata, directory and file count, and size. state List or describe the states in the repository. vfs snapshotID:path Show filesystem (VFS) details for a specific path within a snapshot, listing directory or file attributes, including permissions, ownership, and custom metadata. 
xattr snapshotID:path \u0026#x00A0; EXAMPLES Show repository information:\n$ plakar diag Show detailed information for a snapshot:\n$ plakar diag snapshot abc123 List all states in the repository:\n$ plakar diag state Display a specific object within a snapshot:\n$ plakar diag object 1234567890abcdef Display filesystem details for a path within a snapshot:\n$ plakar diag vfs abc123:/etc/passwd DIAGNOSTICS The plakar-diag utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as an invalid snapshot or object ID, or a failure to retrieve the requested data. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-diag/","section":"Docs","summary":"Display detailed information about Plakar internal structures","title":"diag","type":"docs"},{"content":" PLAKAR-DIFF(1) General Commands Manual PLAKAR-DIFF(1) NAME plakar-diff \u0026#x2014; Show differences between files in Plakar snapshots\nSYNOPSIS plakar diff [-highlight] [-recursive] snapshotID1[:path1] snapshotID2[:path2] DESCRIPTION The plakar diff command compares two Plakar snapshots, optionally restricting to specific files within them. If only snapshot IDs are provided, it compares the root directories of each snapshot. If file paths are specified, the command compares the individual files. The diff output is shown in unified diff format, with an option to highlight differences.\nThe options are as follows:\n-highlight Apply syntax highlighting to the diff output for readability. -recursive When comparing directories, recursively compare all subdirectories. 
EXAMPLES Compare root directories of two snapshots:\n$ plakar diff abc123 def456 Compare /etc/passwd across snapshots with highlighting:\n$ plakar diff -highlight abc123:/etc/passwd def456:/etc/passwd DIAGNOSTICS The plakar-diff utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid snapshot IDs, missing files, or an unsupported file type. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-diff/","section":"Docs","summary":"Show differences between files in Plakar snapshots","title":"diff","type":"docs"},{"content":" PLAKAR-DIGEST(1) General Commands Manual PLAKAR-DIGEST(1) NAME plakar-digest \u0026#x2014; Compute digests for files in a Plakar snapshot\nSYNOPSIS plakar digest [-hashing algorithm] snapshotID[:path] [...] DESCRIPTION The plakar digest command computes and displays digests for the specified path in the given snapshotID. Multiple snapshotID and path arguments may be given. By default, the command computes the digest by reading the file contents.\nThe options are as follows:\n-hashing algorithm Use algorithm to compute the digest. Defaults to SHA256. EXAMPLES Compute the digest of a file within a snapshot:\n$ plakar digest abc123:/etc/passwd Use BLAKE3 as the digest algorithm:\n$ plakar digest -hashing BLAKE3 abc123:/etc/netstart DIAGNOSTICS The plakar-digest utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as failure to retrieve a file digest or invalid snapshot ID. 
SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-digest/","section":"Docs","summary":"Compute digests for files in a Plakar snapshot","title":"digest","type":"docs"},{"content":" PLAKAR-DUP(1) General Commands Manual PLAKAR-DUP(1) NAME plakar-dup \u0026#x2014; Duplicates an existing snapshot with a different ID\nSYNOPSIS plakar dup snapshotID DESCRIPTION The plakar dup command creates a duplicate of an existing snapshot with a new snapshot ID. The new snapshot is an exact copy of the original, including all files and metadata.\nEXAMPLES Create a duplicate of a snapshot with ID \u0026quot;abc123\u0026quot;:\n$ plakar dup abc123 DIAGNOSTICS The plakar-dup utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as failure to retrieve the existing snapshot or invalid snapshot ID. SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-dup/","section":"Docs","summary":"Duplicates an existing snapshot with a different ID","title":"dup","type":"docs"},{"content":" PLAKAR-INFO(1) General Commands Manual PLAKAR-INFO(1) NAME plakar-info \u0026#x2014; Display detailed information about internal structures\nSYNOPSIS plakar info [-errors] [snapshot] DESCRIPTION The plakar info command provides detailed information about a Plakar repository and snapshots. The type of information displayed depends on the specified argument. Without any arguments, display information about the repository.\nThe options are as follows:\n-errors Show errors within the specified snapshot. 
EXAMPLES Show repository information:\n$ plakar info Show detailed information for a snapshot:\n$ plakar info abc123 Show errors within a snapshot:\n$ plakar info -errors abc123 DIAGNOSTICS The plakar-info utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as an invalid snapshot or object ID, or a failure to retrieve the requested data. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-info/","section":"Docs","summary":"Display detailed information about internal structures","title":"info","type":"docs"},{"content":" PLAKAR-LOCATE(1) General Commands Manual PLAKAR-LOCATE(1) NAME plakar-locate \u0026#x2014; Find filenames in a Plakar snapshot\nSYNOPSIS plakar locate [-snapshot snapshotID] patterns ... DESCRIPTION The plakar locate command searches snapshots to find file names matching any of the given patterns and prints the abbreviated snapshot ID and the full path of the matched files. Matching works according to the shell globbing rules.\nIf neither -snapshot nor location flags are given, plakar locate will search in all snapshots.\nIn addition to the flags described below, plakar locate supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-snapshot snapshotID Limit the search to the given snapshot. EXAMPLES Search for files ending in \u0026#x201C;wd\u0026#x201D;:\n$ plakar locate '*wd' abc123:/etc/master.passwd abc123:/etc/passwd DIAGNOSTICS The plakar-locate utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid parameters, inability to create the repository, or configuration issues. 
SEE ALSO plakar(1), plakar-backup(1), plakar-query(7)\nCAVEATS The patterns may have to be quoted to avoid the shell attempting to expand them.\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-locate/","section":"Docs","summary":"Find filenames in a Plakar snapshot","title":"locate","type":"docs"},{"content":" PLAKAR-LOGIN(1) General Commands Manual PLAKAR-LOGIN(1) NAME plakar-login \u0026#x2014; Authenticate to Plakar services\nSYNOPSIS plakar login [-no-spawn] [-status] [-email email | -env | -github] DESCRIPTION The plakar login command initiates an authentication flow with the Plakar platform. Login is optional for most plakar commands but required to enable certain services, such as alerting. See also plakar-service(1).\nOnly one authentication method may be specified per invocation: the -email, -env, and -github options are mutually exclusive. If none is provided, -github is assumed.\nThe options are as follows:\n-email email Send a login link to the specified email address. Clicking the link in the received email will authenticate plakar. -env Persist the value of the PLAKAR_TOKEN environment variable into the configuration. Generate this token with plakar-token(1). -github Use GitHub OAuth to authenticate. A browser will be spawned to initiate the OAuth flow unless -no-spawn is specified. -no-spawn Do not automatically open a browser window for authentication flows. -status Check whether the user is currently logged in. This option cannot be used with any other options. 
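As a hedged sketch of the -env flow described above (the token value is a placeholder, to be generated beforehand with plakar-token(1)):

```shell
# Persist a pre-generated token into the configuration:
$ export PLAKAR_TOKEN=<token>
$ plakar login -env
# Verify the session:
$ plakar login -status
```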
EXAMPLES Start a login via email:\n$ plakar login -email user@example.com Authenticate via GitHub (default, opens browser):\n$ plakar login SEE ALSO plakar(1), plakar-logout(1), plakar-service(1)\nJuly 8, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-login/","section":"Docs","summary":"Authenticate to Plakar services","title":"login","type":"docs"},{"content":" PLAKAR-LOGOUT(1) General Commands Manual PLAKAR-LOGOUT(1) NAME plakar-logout \u0026#x2014; Log out from Plakar services\nSYNOPSIS plakar logout DESCRIPTION The plakar logout command logs out an authenticated session with the Plakar platform.\nEXAMPLES Log out from the current session:\n$ plakar logout SEE ALSO plakar(1), plakar-login(1), plakar-service(1)\nJuly 8, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-logout/","section":"Docs","summary":"Log out from Plakar services","title":"logout","type":"docs"},{"content":" PLAKAR-LS(1) General Commands Manual PLAKAR-LS(1) NAME plakar-ls \u0026#x2014; List snapshots and their contents in a Plakar repository\nSYNOPSIS plakar ls [-uuid] [-recursive] [snapshotID:path] DESCRIPTION The plakar ls command lists snapshots stored in a Plakar repository, and optionally displays the contents of path in a specified snapshot.\nIn addition to the flags described below, plakar ls supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-uuid Display the full UUID for each snapshot instead of the shorter snapshot ID. -recursive List directory contents recursively when exploring snapshot contents. 
EXAMPLES List all snapshots with their short IDs:\n$ plakar ls List all snapshots with UUIDs instead of short IDs:\n$ plakar ls -uuid List snapshots with a specific tag:\n$ plakar ls -tag daily-backup List contents of a specific snapshot:\n$ plakar ls abc123 Recursively list contents of a specific snapshot:\n$ plakar ls -recursive abc123:/etc DIAGNOSTICS The plakar-ls utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as failure to retrieve snapshot information or invalid snapshot ID. SEE ALSO plakar(1), plakar-query(7)\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-ls/","section":"Docs","summary":"List snapshots and their contents in a Plakar repository","title":"ls","type":"docs"},{"content":" PLAKAR-MAINTENANCE(1) General Commands Manual PLAKAR-MAINTENANCE(1) NAME plakar-maintenance \u0026#x2014; Remove unused data from a Plakar repository\nSYNOPSIS plakar maintenance DESCRIPTION The plakar maintenance command removes unused blobs, objects, and chunks from a Plakar repository to reduce storage space. It identifies unreferenced data and reorganizes packfiles to ensure only active snapshots and their dependencies are retained. The maintenance process updates snapshot indexes to reflect these changes.\nDIAGNOSTICS The plakar-maintenance utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred during maintenance, such as failure to update indexes or remove data. 
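Maintenance is typically run after snapshots have been removed; a minimal sketch, reusing the retention example from plakar(1):

```shell
# Remove snapshots older than 30 days, then reclaim the space they held:
$ plakar rm -before 30d
$ plakar maintenance
```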
SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-maintenance/","section":"Docs","summary":"Remove unused data from a Plakar repository","title":"maintenance","type":"docs"},{"content":" PLAKAR-MOUNT(1) General Commands Manual PLAKAR-MOUNT(1) NAME plakar-mount \u0026#x2014; Mount Plakar snapshots as a read-only filesystem\nSYNOPSIS plakar mount mountpoint DESCRIPTION The plakar mount command mounts a Plakar repository snapshot as a read-only filesystem at the specified mountpoint. This allows users to access snapshot contents as if they were part of the local file system, providing easy browsing and retrieval of files without needing to explicitly restore them. This command may not work on all operating systems.\nEXAMPLES Mount a snapshot to the specified directory:\n$ plakar mount ~/mnt DIAGNOSTICS The plakar-mount utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as an invalid mountpoint or failure during the mounting process. SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-mount/","section":"Docs","summary":"Mount Plakar snapshots as a read-only filesystem","title":"mount","type":"docs"},{"content":" PLAKAR-PKG-ADD(1) General Commands Manual PLAKAR-PKG-ADD(1) NAME plakar-pkg-add \u0026#x2014; Install Plakar plugins\nSYNOPSIS plakar pkg add plugin ... DESCRIPTION The plakar pkg add command adds a local or a remote plugin.\nIf plugin matches an existing local file, it is installed directly. 
Otherwise, it is treated as a recipe name and downloaded from the Plakar plugin server, which requires a login via the plakar-login(1) command.\nInstalling plugins without logging in is possible via the plakar-pkg-build(1) command, provided you have the necessary dependencies to build it locally (currently, official plugins require make and a working Go toolchain).\nTo force local resolution, use an absolute path; to force remote fetching, pass an HTTP or HTTPS URL.\nFILES ~/.cache/plakar/plugins/ Plugin cache directory. Respects XDG_CACHE_HOME if set. ~/.local/share/plakar/plugins Plugin directory. Respects XDG_DATA_HOME if set. SEE ALSO plakar-login(1), plakar-pkg-build(1), plakar-pkg-create(1), plakar-pkg-rm(1), plakar-pkg-show(1)\nNovember 27, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-pkg-add/","section":"Docs","summary":"Install Plakar plugins","title":"pkg-add","type":"docs"},{"content":" PLAKAR-PKG-BUILD(1) General Commands Manual PLAKAR-PKG-BUILD(1) NAME plakar-pkg-build \u0026#x2014; Build Plakar plugins from source\nSYNOPSIS plakar pkg build recipe.yaml DESCRIPTION The plakar pkg build command fetches the sources and builds the plugin as specified in the given plakar-pkg-recipe.yaml(5). If it builds successfully, the resulting plugin will be created in the current working directory.\nENVIRONMENT PLAKAR_CLONE_TOKEN If set, this token will be used to authenticate git clone operations. This is useful for cloning private repositories. FILES ~/.cache/plakar/plugins/ Plugin cache directory. Respects XDG_CACHE_HOME if set. ~/.local/share/plakar/plugins Plugin directory. Respects XDG_DATA_HOME if set. 
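A minimal sketch of the build flow, assuming a recipe.yaml as described in plakar-pkg-recipe.yaml(5); the token value is a placeholder and is only needed for private repositories:

```shell
# Authenticate git clone operations (private repositories only):
$ export PLAKAR_CLONE_TOKEN=<token>
# Fetch the sources and build; the resulting plugin package is
# written to the current working directory:
$ plakar pkg build recipe.yaml
```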
SEE ALSO plakar-pkg-add(1), plakar-pkg-create(1), plakar-pkg-rm(1), plakar-pkg-show(1), plakar-pkg-recipe.yaml(5)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-pkg-build/","section":"Docs","summary":"Build Plakar plugins from source","title":"pkg-build","type":"docs"},{"content":" PLAKAR-PKG-CREATE(1) General Commands Manual PLAKAR-PKG-CREATE(1) NAME plakar-pkg-create \u0026#x2014; Package a plugin\nSYNOPSIS plakar pkg create manifest.yaml version DESCRIPTION The plakar pkg create command assembles a plugin using the provided plakar-pkg-manifest.yaml(5) and version.\nAll the files needed for the plugin need to be already available, i.e. executables must already be built.\nAll external files must reside in the same directory as the manifest.yaml or in subdirectories.\nSEE ALSO plakar-pkg-add(1), plakar-pkg-build(1), plakar-pkg-rm(1), plakar-pkg-show(1), plakar-pkg-manifest.yaml(5)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-pkg-create/","section":"Docs","summary":"Package a plugin","title":"pkg-create","type":"docs"},{"content":" PLAKAR-PKG-MANIFEST.YAML(5) File Formats Manual PLAKAR-PKG-MANIFEST.YAML(5) NAME manifest.yaml \u0026#x2014; Manifest for plugin assembly\nDESCRIPTION The manifest.yaml file format describes how to package a plugin. No build or compilation is done, so all executables and other files must be prepared beforehand.\nmanifest.yaml must have a top-level YAML object with the following fields:\nname The name of the plugin. display_name The displayed name in the UI. description A short description of the connectors. homepage A link to the homepage. license The license of the connectors. tags A YAML array of strings for tags that describe the connectors. api_version The API version supported. version The plugin version, which doubles as the git tag as well. 
It must follow semantic versioning and have a \u0026#x2018;v\u0026#x2019; prefix, e.g. \u0026#x2018;v1.2.3\u0026#x2019;. connectors A YAML array of objects with the following properties: type The connector type, one of importer, exporter, or store. protocols An array of YAML strings containing all the protocols that the connector supports. location_flags An optional array of YAML strings describing some properties of the connector. These properties are: localfs Whether paths given to this connector have to be made absolute. file Whether this store backend handles a Kloset in a single file, e.g. a ptar file. executable Path to the plugin executable. extra_file An optional array of YAML strings. These are extra files that need to be included in the package. EXAMPLES A sample manifest for the \u0026#x201C;fs\u0026#x201D; plugin is as follows:\n# manifest.yaml name: fs display_name: file system connector description: file storage but as external plugin homepage: https://github.com/PlakarKorp/integration-fs license: ISC tags: [ fs, filesystem, \u0026quot;local files\u0026quot; ] api_version: 1.0.0 version: 1.0.0 connectors: - type: importer executable: fs-importer protocols: [fs] - type: exporter executable: fs-exporter protocols: [fs] - type: storage executable: fs-store protocols: [fs] SEE ALSO plakar-pkg-create(1)\nJuly 20, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-pkg-manifest.yaml/","section":"Docs","summary":"Manifest for plugin assembly","title":"pkg-manifest.yaml","type":"docs"},{"content":" PLAKAR-PKG-RECIPE.YAML(5) File Formats Manual PLAKAR-PKG-RECIPE.YAML(5) NAME recipe.yaml \u0026#x2014; Recipe to build Plakar plugins from source\nDESCRIPTION The recipe.yaml file format describes how to fetch and build Plakar plugins. It must have a top-level YAML object with the following fields:\nname The name of the plugin. version The plugin version, which doubles as the git tag as well. 
It must follow semantic versioning and have a \u0026#x2018;v\u0026#x2019; prefix, e.g. \u0026#x2018;v1.2.3\u0026#x2019;. repository URL to the git repository holding the plugin. EXAMPLES A sample recipe to build the \u0026#x201C;fs\u0026#x201D; plugin is as follows:\n# recipe.yaml name: fs version: v1.0.0 repository: https://github.com/PlakarKorp/integrations-fs SEE ALSO plakar-pkg-build(1)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-pkg-recipe.yaml/","section":"Docs","summary":"Recipe to build Plakar plugins from source","title":"pkg-recipe.yaml","type":"docs"},{"content":" PLAKAR-PKG-RM(1) General Commands Manual PLAKAR-PKG-RM(1) NAME plakar-pkg-rm \u0026#x2014; Uninstall Plakar plugins\nSYNOPSIS plakar pkg rm plugin ... DESCRIPTION The plakar pkg rm command removes plugins that have been previously installed with the plakar-pkg-add(1) command.\nThe list of plugins can be obtained with plakar-pkg-show(1).\nEXAMPLES Removing a plugin:\n$ plakar pkg show epic-v1.2.3 $ plakar pkg rm epic-v1.2.3 SEE ALSO plakar-pkg-add(1), plakar-pkg-build(1), plakar-pkg-create(1), plakar-pkg-show(1)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-pkg-rm/","section":"Docs","summary":"Uninstall Plakar plugins","title":"pkg-rm","type":"docs"},{"content":" PLAKAR-PKG-SHOW(1) General Commands Manual PLAKAR-PKG-SHOW(1) NAME plakar-pkg-show \u0026#x2014; Show installed Plakar plugins\nSYNOPSIS plakar pkg show [-available] [-long] DESCRIPTION The plakar pkg show command shows the currently installed plugins.\nThe options are as follows:\n-available Instead of installed packages, show the set of prebuilt packages available for this system. -long Show the full package name. FILES ~/.cache/plakar/plugins/ Plugin cache directory. Respects XDG_CACHE_HOME if set. ~/.local/share/plakar/plugins Plugin directory. Respects XDG_DATA_HOME if set. 
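The flags above can be combined into a quick plugin inventory check; a short sketch:

```shell
$ plakar pkg show             # installed plugins
$ plakar pkg show -long       # same list, full package names
$ plakar pkg show -available  # prebuilt packages for this system
```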
SEE ALSO plakar-pkg-add(1), plakar-pkg-build(1), plakar-pkg-create(1), plakar-pkg-rm(1)\nJuly 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-pkg-show/","section":"Docs","summary":"Show installed Plakar plugins","title":"pkg-show","type":"docs"},{"content":" PLAKAR(1) General Commands Manual PLAKAR(1) NAME plakar \u0026#x2014; effortless backups\nSYNOPSIS plakar [-config dir] [-concurrency number] [-cpu number] [-keyfile path] [-quiet] [-trace subsystems] [at kloset] subcommand ... DESCRIPTION plakar is a tool to create distributed, versioned backups with compression, encryption, and data deduplication.\nBy default, plakar operates on the Kloset store at ~/.plakar. This can be changed by using the at option.\nThe following options are available:\n-config dir Specify an alternate configuration directory. Defaults to ~/.config/plakar. -concurrency number Set the maximum number of parallel tasks for faster processing. Defaults to CPU count. -cpu number Limit the number of parallel workers plakar uses to number. By default it is the number of online CPUs. -keyfile path Read the passphrase from the key file at path instead of prompting. Overrides the PLAKAR_PASSPHRASE environment variable. -quiet Disable all output except for errors. -trace subsystems Display trace logs. subsystems is a comma-separated series of keywords to enable the trace logs for different subsystems: all, trace, repository, snapshot and server. at kloset Operates on the given kloset store. It can be a path, a URI, or a label in the form \u0026#x201C;@name\u0026#x201D; to reference a configuration created with plakar-store(1). The following commands are available:\narchive Create an archive from a Kloset snapshot, documented in plakar-archive(1). backup Create a new Kloset snapshot, documented in plakar-backup(1). cat Display file contents from a Kloset snapshot, documented in plakar-cat(1). 
check Check data integrity in a Kloset store, documented in plakar-check(1). create Create a new Kloset store, documented in plakar-create(1). destination Manage configurations for the destination connectors, documented in plakar-destination(1). diff Show differences between files in a Kloset snapshot, documented in plakar-diff(1). digest Compute digests for files in a Kloset snapshot, documented in plakar-digest(1). help Show this manpage and the ones for the subcommands. info Display detailed information about internal structures, documented in plakar-info(1). locate Find filenames in a Kloset snapshot, documented in plakar-locate(1). ls List snapshots and their contents in a Kloset store, documented in plakar-ls(1). maintenance Remove unused data from a Kloset store, documented in plakar-maintenance(1). mount Mount Kloset snapshots as a read-only filesystem, documented in plakar-mount(1). ptar Create a .ptar archive, documented in plakar-ptar(1). pkg show List installed plugins, documented in plakar-pkg-show(1). pkg add Install a plugin, documented in plakar-pkg-add(1). pkg build Build a plugin from source, documented in plakar-pkg-build(1). pkg create Package a plugin, documented in plakar-pkg-create(1). pkg rm Uninstall a plugin, documented in plakar-pkg-rm(1). restore Restore files from a Kloset snapshot, documented in plakar-restore(1). rm Remove snapshots from a Kloset store, documented in plakar-rm(1). server Start a Plakar server, documented in plakar-server(1). service Manage additional Plakar services that require you to be logged in, documented in plakar-service(1). source Manage configurations for the source connectors, documented in plakar-source(1). store Manage configurations for storage connectors, documented in plakar-store(1). sync Synchronize snapshots between Kloset stores, documented in plakar-sync(1). ui Serve the Plakar web user interface, documented in plakar-ui(1). 
version Display the current Plakar version, documented in plakar-version(1). ENVIRONMENT PLAKAR_PASSPHRASE Passphrase to unlock the Kloset store; overrides the one from the configuration. If set, plakar won't prompt to unlock. The option keyfile overrides this environment variable. PLAKAR_REPOSITORY Reference to the Kloset store. PLAKAR_TOKEN Token to authenticate for Plakar services. FILES ~/.cache/plakar Plakar cache directories. ~/.config/plakar/destinations.yml Restore destinations configuration. ~/.config/plakar/sources.yml Backup sources configuration. ~/.config/plakar/stores.yml Kloset stores configuration. ~/.plakar Default Kloset store location. EXAMPLES Create an encrypted Kloset store at the default location:\n$ plakar create Create an encrypted Kloset store on AWS S3:\n$ plakar store add mys3bucket \\ location=s3://s3.eu-west-3.amazonaws.com/backups \\ access_key=\u0026quot;access_key\u0026quot; \\ secret_access_key=\u0026quot;secret_key\u0026quot; $ plakar at @mys3bucket create Create a snapshot of the current directory on the @mys3bucket Kloset store:\n$ plakar at @mys3bucket backup List the snapshots of the default Kloset store:\n$ plakar ls Restore the file \u0026#x201C;notes.md\u0026#x201D; in the current directory from the snapshot with id \u0026#x201C;abcd\u0026#x201D;:\n$ plakar restore -to . abcd:notes.md Remove snapshots older than 30 days:\n$ plakar rm -before 30d December 9, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar/","section":"Docs","summary":"effortless backups","title":"plakar","type":"docs"},{"content":" PLAKAR-POLICY(1) General Commands Manual PLAKAR-POLICY(1) NAME plakar-policy \u0026#x2014; Manage Plakar retention policies\nSYNOPSIS plakar policy subcommand ... 
DESCRIPTION The plakar policy command manages the retention policies for plakar-prune(1).\nThe configuration consists of a set of named entries, each of them describing a retention policy.\nThe subcommands are as follows:\nadd name [option=value ...] Create a new policy entry identified by name. Additional parameters can be set by adding option=value parameters. rm name Remove the policy identified by name from the configuration. set name [option=value ...] Set the option to value for the policy identified by name. Multiple option/value pairs can be specified. show [-ini] [-json] [-yaml] [name ...] Display the current policies configuration. -ini, -json and -yaml control the output format, which is YAML by default. unset name [option ...] Remove the option for the policy identified by name. The available options are described in plakar-query(7): each option corresponds to the similarly named flag.\nEXIT STATUS The plakar-policy utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nEXAMPLES Create a policy \u0026#x2018;weekly\u0026#x2019; that keeps one backup per week and discards backups older than three months:\n$ plakar policy add weekly $ plakar policy set weekly since='3 months' $ plakar policy set weekly per-week=1 Prune snapshots according to the \u0026#x2018;weekly\u0026#x2019; policy:\n$ plakar prune -policy weekly SEE ALSO plakar(1), plakar-prune(1)\nSeptember 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-policy/","section":"Docs","summary":"Manage Plakar retention policies","title":"policy","type":"docs"},{"content":" PLAKAR-PRUNE(1) General Commands Manual PLAKAR-PRUNE(1) NAME plakar-prune \u0026#x2014; Prune snapshots according to a policy\nSYNOPSIS plakar prune [-apply] [-policy name] [snapshotID ...] DESCRIPTION The plakar prune command deletes snapshots from a Plakar repository. 
Snapshots can be filtered for deletion by age, by tag, or by specifying the snapshot IDs to remove. If no snapshotID is provided, a filter such as -before or -tag must be specified to select the snapshots to delete.\nplakar prune supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe arguments are as follows:\n-apply Delete the matching snapshots. The default is to only show the snapshots that would be removed, without actually executing the operation. -policy name Use the given policy. See plakar-policy(1) for how policies are managed. EXAMPLES Remove a specific snapshot by ID:\n$ plakar prune abc123 Remove snapshots older than 30 days:\n$ plakar prune -days 30 Remove snapshots with a specific tag:\n$ plakar prune -tag daily-backup Remove snapshots older than 1 year with a specific tag:\n$ plakar prune -years 1 -tag daily-backup DIAGNOSTICS The plakar-prune utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid date format or failure to delete a snapshot. SEE ALSO plakar(1), plakar-backup(1), plakar-policy(1), plakar-query(7)\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-prune/","section":"Docs","summary":"Prune snapshots according to a policy","title":"prune","type":"docs"},{"content":" PLAKAR-PTAR(1) General Commands Manual PLAKAR-PTAR(1) NAME plakar-ptar \u0026#x2014; generate a self-contained Kloset archive (.ptar)\nSYNOPSIS plakar ptar [-plaintext] [-overwrite] [-k location] -o file.ptar [path ...] DESCRIPTION The plakar ptar command creates a single portable archive (a \u0026#x2018;.ptar\u0026#x2019; file) that bundles one or more existing Plakar repositories (\u0026#x201C;klosets\u0026#x201D;) and/or arbitrary filesystem paths into a self-contained package.
The resulting archive preserves repository metadata, snapshots and data chunks, and is compressed and encrypted for secure transport or off-site storage.\nAt least one data source must be supplied: either one or more -k or -kloset options naming remote or local kloset repositories, and/or one or more path arguments identifying files or directories to back up. The destination archive name is mandatory and supplied with -o.\nUnless the -overwrite flag is given, plakar ptar refuses to replace an existing archive.\nThe options are as follows:\n-plaintext Disable transparent encryption of the archive. If omitted, plakar ptar encrypts repository data using a key derived from the passphrase specified via PLAKAR_PASSPHRASE or prompted interactively. -overwrite Overwrite an existing .ptar file at the destination path. -k location, -kloset location Add a kloset repository to include in the archive. May be specified multiple times to bundle several repositories. -o file.ptar Path of the archive to create. This option is required. path ... Zero or more filesystem paths to back up directly into the archive. ENVIRONMENT PLAKAR_PASSPHRASE Passphrase used to derive the encryption key when encryption is enabled. DIAGNOSTICS The plakar-ptar utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred (invalid arguments, existing archive without -overwrite, hashing algorithm unknown, repository access failure, I/O errors, etc.). 
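EXAMPLES The examples below are illustrative sketches; the store name @mystore and the archive paths are placeholders, not values taken from this manual. Bundle an existing Kloset store into an encrypted archive:\n$ plakar ptar -k @mystore -o backup.ptar Archive a directory directly, replacing any existing archive at the destination:\n$ plakar ptar -overwrite -o etc.ptar /etc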
SEE ALSO plakar(1), plakar-backup(1), plakar-create(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-ptar/","section":"Docs","summary":"generate a self-contained Kloset archive (.ptar)","title":"ptar","type":"docs"},{"content":" PLAKAR-QUERY(7) Miscellaneous Information Manual PLAKAR-QUERY(7) NAME plakar-query \u0026#x2014; query flags shared among many Plakar subcommands\nDESCRIPTION What follows is a set of command line arguments that many plakar(1) subcommands provide to filter snapshots.\nThere are two kinds of flags:\nmatchers These select snapshots. If combined, the result is the union of the various matchers. filters These instead filter the output of the matchers by yielding only the snapshots matching certain criteria. If combined, the result is the intersection of the various filters. If no matcher is given, all the snapshots are implicitly selected, and then filtered according to the given filters, if any.\nThe matchers are divided into:\nmatchers that select snapshots from the last n units of time: -minutes n \u0026#x00A0; -hours n \u0026#x00A0; -days n \u0026#x00A0; -weeks n \u0026#x00A0; -months n \u0026#x00A0; -years n \u0026#x00A0; Or matchers that select snapshots taken during the last n occurrences of a given weekday:\n-mondays n \u0026#x00A0; -thuesdays n \u0026#x00A0; -wednesdays n \u0026#x00A0; -thursdays n \u0026#x00A0; -fridays n \u0026#x00A0; -saturdays n \u0026#x00A0; -sundays n \u0026#x00A0; matchers that select at most n snapshots per time period: -per-minute n \u0026#x00A0; -per-hour n \u0026#x00A0; -per-day n \u0026#x00A0; -per-week n \u0026#x00A0; -per-month n \u0026#x00A0; -per-year n \u0026#x00A0; -per-monday n \u0026#x00A0; -per-thuesday n \u0026#x00A0; -per-wednesday n \u0026#x00A0; -per-thursday n \u0026#x00A0; -per-friday n \u0026#x00A0; -per-saturday n \u0026#x00A0; -per-sunday n \u0026#x00A0; The filters are:\n-before date Select snapshots older than the given date.
The date may be in RFC3339 format, as \u0026#x201C;YYYY-mm-DD HH:MM\u0026#x201D;, \u0026#x201C;YYYY-mm-DD HH:MM:SS\u0026#x201D;, \u0026#x201C;YYYY-mm-DD\u0026#x201D;, or \u0026#x201C;YYYY/mm/DD\u0026#x201D; where YYYY is a year, mm a month, DD a day, HH an hour in 24-hour format, MM minutes and SS seconds. Alternatively, human-style intervals like \u0026#x201C;half an hour\u0026#x201D;, \u0026#x201C;a month\u0026#x201D; or \u0026#x201C;2h30m\u0026#x201D; are also accepted.\n-category name Select snapshots whose category is name. -environment name Select snapshots whose environment is name. -job name Select snapshots whose job is name. -latest Select only the latest snapshot. -name name Select snapshots whose name is name. -perimeter name Select snapshots whose perimeter is name. -root path Select snapshots whose root directory is path. May be specified multiple times; snapshots are selected if any of the given paths matches. -since date Select snapshots newer than the given date. The accepted format is the same as -before. -tag name Select snapshots tagged with name. May be specified multiple times, and multiple tags may be given at the same time if comma-separated. If a tag name is prefixed with an exclamation mark \u0026#x2018;!\u0026#x2019;, the matching is inverted and the snapshot is ignored if it contains said tag. November 28, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-query/","section":"Docs","summary":"query flags shared among many Plakar subcommands","title":"query","type":"docs"},{"content":" PLAKAR-RESTORE(1) General Commands Manual PLAKAR-RESTORE(1) NAME plakar-restore \u0026#x2014; Restore files from a Plakar snapshot\nSYNOPSIS plakar restore [-before date] [-category category] [-environment environment] [-job job] [-latest] [-name name] [-perimeter perimeter] [-quiet] [-since date] [-skip-permissions] [-tag tag] [-to directory] [-o option=value] [snapshotID:path ...]
DESCRIPTION The plakar restore command is used to restore files and directories at path from a specified Plakar snapshot to the local file system. If path is omitted, then all the files in the specified snapshotID are restored. If no snapshotID is provided, the command attempts to restore the current working directory from the last matching snapshot.\nThe options are as follows:\n-name string Only apply command to snapshots that match name. -category string Only apply command to snapshots that match category. -environment string Only apply command to snapshots that match environment. -perimeter string Only apply command to snapshots that match perimeter. -job string Only apply command to snapshots that match job. -tag string Only apply command to snapshots that match tag. -skip-permissions Skip restoring file permissions and ownership during restore, defaulting to 0750 for directories and 0640 for files. -to directory Specify the base directory to which the files will be restored. If omitted, files are restored to the current working directory. -o option=value Can be used to pass extra arguments to the destination connector. The given option takes precedence over the configuration file. -quiet Suppress output to standard output, only logging errors and warnings. EXAMPLES Restore all files from a specific snapshot to the current directory:\n$ plakar restore abc123 Restore to a specific directory:\n$ plakar restore -to /mnt/ abc123 Restore latest snapshot to a specific directory:\n$ plakar restore -latest -to /mnt/ abc123 Restore specific path to a specific directory:\n$ plakar restore -to /mnt/ abc123:/etc/apache2 Restore to a specific destination:\n$ plakar restore -to @s3target abc123 Restore specific path to a specific destination:\n$ plakar restore -to @s3target abc123:/etc/apache2 DIAGNOSTICS The plakar-restore utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully.
\u0026gt;0 An error occurred, such as a failure to locate the snapshot or a destination directory issue. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-restore/","section":"Docs","summary":"Restore files from a Plakar snapshot","title":"restore","type":"docs"},{"content":" PLAKAR-RM(1) General Commands Manual PLAKAR-RM(1) NAME plakar-rm \u0026#x2014; Remove snapshots from a Plakar repository\nSYNOPSIS plakar rm [-name name] [-category category] [-environment environment] [-perimeter perimeter] [-job job] [-tag tag] [-latest] [-before date] [-since date] [snapshotID ...] DESCRIPTION The plakar rm command deletes snapshots from a Plakar repository. Snapshots can be filtered for deletion by age, by tag, or by specifying the snapshot IDs to remove. If no snapshotID is provided, a filter such as -before or -tag must be specified to select the snapshots to delete.\nThe arguments are as follows:\n-name name Filter snapshots that match name. -category category Filter snapshots that match category. -environment environment Filter snapshots that match environment. -perimeter perimeter Filter snapshots that match perimeter. -job job Filter snapshots that match job. -tag tag Filter snapshots that match tag. -latest Filter latest snapshot matching filters. -before date Filter snapshots matching filters and older than the specified date. Accepted formats include relative durations (e.g. 2d for two days, 1w for one week) or specific dates in various formats (e.g. 2006-01-02 15:04:05). -since date Filter snapshots matching filters and created since the specified date, included. Accepted formats include relative durations (e.g. 2d for two days, 1w for one week) or specific dates in various formats (e.g. 2006-01-02 15:04:05).
EXAMPLES Remove a specific snapshot by ID:\n$ plakar rm abc123 Remove snapshots older than 30 days:\n$ plakar rm -before 30d Remove snapshots with a specific tag:\n$ plakar rm -tag daily-backup Remove snapshots older than 1 year with a specific tag:\n$ plakar rm -before 1y -tag daily-backup DIAGNOSTICS The plakar-rm utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid date format or failure to delete a snapshot. SEE ALSO plakar(1), plakar-backup(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-rm/","section":"Docs","summary":"Remove snapshots from a Plakar repository","title":"rm","type":"docs"},{"content":" PLAKAR-SCHEDULER(1) General Commands Manual PLAKAR-SCHEDULER(1) NAME plakar-scheduler \u0026#x2014; Run the Plakar scheduler\nSYNOPSIS plakar scheduler [-foreground] [start -tasks configfile] [stop] DESCRIPTION The plakar scheduler runs in the background and manages task execution based on the defined schedule.\nThe options are as follows:\n-foreground Run the scheduler in the foreground instead of as a background service. -tasks configfile Specify the configuration file that contains the task definitions and schedules. start -tasks configfile Starts the scheduler service and its tasks from configfile. stop Stop the currently running scheduler service. DIAGNOSTICS The plakar-scheduler utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 An error occurred, such as invalid parameters, inability to create the repository, or configuration issues. 
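EXAMPLES The examples below are sketches; the task configuration path is a placeholder, not a value taken from this manual. Start the scheduler with a task definition file:\n$ plakar scheduler start -tasks /etc/plakar/tasks.yaml Run the scheduler in the foreground for debugging:\n$ plakar scheduler -foreground start -tasks /etc/plakar/tasks.yaml Stop the running scheduler:\n$ plakar scheduler stop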
SEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-scheduler/","section":"Docs","summary":"Run the Plakar scheduler","title":"scheduler","type":"docs"},{"content":" PLAKAR-SERVER(1) General Commands Manual PLAKAR-SERVER(1) NAME plakar-server \u0026#x2014; Start a Plakar server\nSYNOPSIS plakar server [-allow-delete] [-listen [host]:port] DESCRIPTION The plakar server command starts a Plakar server instance at the provided address, allowing remote interaction with a Kloset store over a network.\nThe options are as follows:\n-allow-delete Enable delete operations. By default, delete operations are disabled to prevent accidental data loss. -listen [host]:port The host and port to listen on, separated by a colon. The host name is optional, and defaults to all available addresses. If -listen is not provided, the server defaults to listening on localhost at port 9876. EXAMPLES Start a plakar server on the local store:\n$ plakar server Start a plakar server on a remote store:\n$ plakar at sftp://example.org server Start a server on a specific address and port:\n$ plakar server -listen 127.0.0.1:12345 DIAGNOSTICS The plakar-server utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nCAVEATS When a host name is provided, plakar server uses only one of the IP addresses it resolves to, preferring IPv4.\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-server/","section":"Docs","summary":"Start a Plakar server","title":"server","type":"docs"},{"content":" PLAKAR-SERVICE(1) General Commands Manual PLAKAR-SERVICE(1) NAME plakar-service \u0026#x2014; Manage optional Plakar-connected services\nSYNOPSIS plakar service list plakar service add name [key=value ...]
plakar service rm name plakar service status name plakar service show name plakar service enable name plakar service disable name plakar service set name [key=value ...] plakar service unset name [key ...] DESCRIPTION The plakar service command allows you to enable, disable, and inspect additional services that integrate with the plakar platform via plakar-login(1) authentication. These services connect to the plakar.io infrastructure, and should only be enabled if you agree to transmit non-sensitive operational data to plakar.io.\nAll subcommands require prior authentication via plakar-login(1).\nServices are managed by the backend and discovered at runtime. For example, when the \u0026#x201C;alerting\u0026#x201D; service is enabled, it will:\nSend email notifications when operations fail. Expose the latest alerting reports in the Plakar UI (see plakar-ui(1)). By default, all services are disabled.\nSUBCOMMANDS list Display the list of available services. add name [key=value ...] Set the configuration for the service identified by name and enable it. The configuration is defined by the given set of key/value pairs. The existing configuration, if any, is discarded. rm name Disable the service identified by name and discard its configuration. status name Display the current status (enabled or disabled) of the named service. show name Display the configuration for the specified service. enable name Enable the specified service. disable name Disable the specified service. set name [key=value ...] Set the configuration key to value for the service identified by name. Multiple key/value pairs can be specified. unset name [key ...] Unset the configuration key for the service identified by name. Multiple keys can be specified.
EXAMPLES Check the status of the alerting service:\n$ plakar service status alerting Enable alerting:\n$ plakar service enable alerting Disable alerting:\n$ plakar service disable alerting SEE ALSO plakar-login(1), plakar-ui(1)\nAugust 7, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-service/","section":"Docs","summary":"Manage optional Plakar-connected services","title":"service","type":"docs"},{"content":" PLAKAR-SOURCE(1) General Commands Manual PLAKAR-SOURCE(1) NAME plakar-source \u0026#x2014; Manage Plakar backup source configuration\nSYNOPSIS plakar source subcommand ... DESCRIPTION The plakar source command manages the configuration of data sources for Plakar to back up.\nThe configuration consists of a set of named entries, each of them describing a source for a backup operation.\nA source is defined by at least a location, specifying the importer to use, and some importer-specific parameters.\nThe subcommands are as follows:\nadd name location [option=value ...] Create a new source entry identified by name with the specified location describing the importer to use. Additional importer options can be set by adding option=value parameters. check name Check whether the importer for the source identified by name is properly configured. import [-config location] [-overwrite] [-rclone] [sections ...] Import source configurations from various sources including files, piped input, or rclone configurations.
By default, reads from stdin, allowing for piped input from other commands.\nThe -config option specifies a file or URL to read the configuration from.\nThe -overwrite option allows overwriting existing source configurations with the same names.\nThe -rclone option treats the input as an rclone configuration, useful for importing rclone remotes as Plakar sources.\nSpecific sections can be imported by listing their names.\nSections can be renamed during import by appending :newname.\nFor detailed examples and usage patterns, see the https://plakar.io/docs/v1.1.0/guides/importing-configurations/ Importing Configurations guide.\nping name Try to open the data source identified by name to make sure it is reachable. rm name Remove the source identified by name from the configuration. set name [option=value ...] Set the option to value for the source identified by name. Multiple option/value pairs can be specified. show [-secrets] [name ...] Display the current sources configuration. If -secrets is specified, sensitive information such as passwords or tokens will be shown. unset name [option ...] Remove the option for the source entry identified by name. EXIT STATUS The plakar-source utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nSeptember 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-source/","section":"Docs","summary":"Manage Plakar backup source configuration","title":"source","type":"docs"},{"content":" PLAKAR-STORE(1) General Commands Manual PLAKAR-STORE(1) NAME plakar-store \u0026#x2014; Manage Plakar store configurations\nSYNOPSIS plakar store subcommand ... 
DESCRIPTION The plakar store command manages the Plakar store configurations.\nThe configuration consists of a set of named entries, each of them describing a Plakar store holding backups.\nA store is defined by at least a location, specifying the storage implementation to use, and some storage-specific parameters.\nThe subcommands are as follows:\nadd name location [option=value ...] Create a new store entry identified by name with the specified location. Specific additional configuration parameters can be set by adding option=value parameters. check name Check whether the store identified by name is properly configured. import [-config location] [-overwrite] [-rclone] [sections ...] Import store configurations from various sources including files, piped input, or rclone configurations. By default, reads from stdin, allowing for piped input from other commands.\nThe -config option specifies a file or URL to read the configuration from.\nThe -overwrite option allows overwriting existing store configurations with the same names.\nThe -rclone option treats the input as an rclone configuration, useful for importing rclone remotes as Plakar stores.\nSpecific sections can be imported by listing their names.\nSections can be renamed during import by appending :newname.\nFor detailed examples and usage patterns, see the https://plakar.io/docs/v1.1.0/guides/importing-configurations/ Importing Configurations guide.\nping name Try to connect to the store identified by name to make sure it is reachable. rm name Remove the store identified by name from the configuration. set name [option=value ...] Set the option to value for the store identified by name. Multiple option/value pairs can be specified. show [-secrets] [name ...] Display the current stores configuration. If -secrets is specified, sensitive information such as passwords or tokens will be shown. unset name [option ...] Remove the option for the store entry identified by name.
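EXAMPLES The examples below mirror the S3 store configured in plakar(1); the store name, bucket location and credentials are placeholders. Add an S3 store and verify it is reachable:\n$ plakar store add mys3bucket \\ location=s3://s3.eu-west-3.amazonaws.com/backups \\ access_key=\u0026quot;access_key\u0026quot; \\ secret_access_key=\u0026quot;secret_key\u0026quot; $ plakar store ping mys3bucket Display its configuration:\n$ plakar store show mys3bucket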
DIAGNOSTICS The plakar-store utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\nSEE ALSO plakar(1)\nSeptember 11, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-store/","section":"Docs","summary":"Manage Plakar store configurations","title":"store","type":"docs"},{"content":" PLAKAR-SYNC(1) General Commands Manual PLAKAR-SYNC(1) NAME plakar-sync \u0026#x2014; Synchronize snapshots between Plakar repositories\nSYNOPSIS plakar sync [-packfiles path] [snapshotID] to | from | with repository DESCRIPTION The plakar sync command synchronizes snapshots between two Plakar repositories. If a specific snapshot ID is provided, only snapshots with matching IDs will be synchronized.\nplakar sync supports the location flags documented in plakar-query(7) to precisely select snapshots.\nThe options are as follows:\n-packfiles path Path where the temporary packfiles are placed instead of building them in the default temporary directory. If the special value \u0026#x2018;memory\u0026#x2019; is specified then the packfiles are built in memory. The arguments are as follows:\nto | from | with Specifies the direction of synchronization: to Synchronize snapshots from the local repository to the specified peer repository. from Synchronize snapshots from the specified peer repository to the local repository. with Synchronize snapshots in both directions, ensuring both repositories are fully synchronized. repository Path to the peer repository to synchronize with.
EXAMPLES Synchronize the snapshot \u0026#x2018;abcd\u0026#x2019; with a peer repository:\n$ plakar sync abcd to @peer Bi-directional synchronization with peer repository of recent snapshots:\n$ plakar sync -since 7d with @peer Synchronize all snapshots of @peer to @repo:\n$ plakar at @repo sync from @peer DIAGNOSTICS The plakar-sync utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 General failure occurred, such as an invalid repository path, snapshot ID mismatch, or network error. SEE ALSO plakar(1), plakar-query(7)\nSeptember 10, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-sync/","section":"Docs","summary":"Synchronize snapshots between Plakar repositories","title":"sync","type":"docs"},{"content":" PLAKAR-TOKEN(1) General Commands Manual PLAKAR-TOKEN(1) NAME plakar-token \u0026#x2014; Manage Plakar tokens\nSYNOPSIS plakar token [create] DESCRIPTION The plakar token command manages tokens used to authenticate to Plakar services. Set the PLAKAR_TOKEN environment variable to use a token, and see plakar-login(1) to persist it in the configuration.\nSUBCOMMANDS create Create a new token. SEE ALSO plakar(1), plakar-login(1)\nDecember 9, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-token/","section":"Docs","summary":"Manage Plakar tokens","title":"token","type":"docs"},{"content":" PLAKAR-UI(1) General Commands Manual PLAKAR-UI(1) NAME plakar-ui \u0026#x2014; Serve the Plakar web user interface\nSYNOPSIS plakar ui [-addr address] [-cors] [-no-auth] [-no-spawn] DESCRIPTION The plakar ui command serves the Plakar web user interface. By default, it opens the default web browser.\nThe options are as follows:\n-addr address Specify the address and port for the UI to listen on separated by a colon, (e.g. localhost:8080). 
If omitted, plakar ui listens on localhost on a random port. -cors Set the \u0026#x2018;Access-Control-Allow-Origin\u0026#x2019; HTTP header to allow the UI to be accessed from any origin. -no-auth Disable the authentication token that otherwise is needed to consume the exposed HTTP APIs. -no-spawn Do not automatically open the web browser. EXAMPLES Use a custom address and disable the automatic browser launch:\n$ plakar ui -addr localhost:9090 -no-spawn DIAGNOSTICS The plakar-ui utility exits\u0026#x00A0;0 on success, and\u0026#x00A0;\u0026gt;0 if an error occurs.\n0 Command completed successfully. \u0026gt;0 A general error occurred, such as an inability to launch the UI or bind to the specified address. SEE ALSO plakar(1)\nAugust 6, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-ui/","section":"Docs","summary":"Serve the Plakar web user interface","title":"ui","type":"docs"},{"content":" PLAKAR-VERSION(1) General Commands Manual PLAKAR-VERSION(1) NAME plakar-version \u0026#x2014; Display the current Plakar version\nSYNOPSIS plakar version DESCRIPTION The plakar version command displays the current version of Plakar.\nSEE ALSO plakar(1)\nJuly 3, 2025 Plakar ","date":"24 March 2026","externalUrl":null,"permalink":"/docs/v1.1.0/references/commands/plakar-version/","section":"Docs","summary":"Display the current Plakar version","title":"version","type":"docs"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/apple/","section":"Tags","summary":"","title":"Apple","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/archives/","section":"Tags","summary":"","title":"Archives","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/automation/","section":"Tags","summary":"","title":"Automation","type":"tags"},{"content":"","date":"23 March
2026","externalUrl":null,"permalink":"/tags/aws/","section":"Tags","summary":"","title":"Aws","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/backblaze/","section":"Tags","summary":"","title":"Backblaze","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/bsd/","section":"Tags","summary":"","title":"BSD","type":"tags"},{"content":" Why protecting CalDAV matters # Calendar data has become a critical infrastructure for personal and professional life. Meetings, appointments, deadlines, and event history represent important scheduling information that organizations and individuals depend on daily.\nCalDAV provides reliable synchronization across devices, but synchronization is not the same as backup. Standard CalDAV usage faces several risks:\nAccidental Deletion: Events can be permanently deleted across all synchronized devices instantly. Limited Recovery: Most providers offer minimal version history or trash functionality with short retention windows. Calendar Corruption: Malformed events or sync errors can corrupt entire calendars. Security and Compromise # CalDAV servers are accessed via account credentials and application-specific passwords. If credentials are compromised or permissions are misconfigured, your calendar data is vulnerable:\nMass Deletion: Unauthorized access can delete years of event history instantly. Silent Modification: Compromised accounts can alter meeting times, locations, or attendee lists without detection. Sync Propagation: Malicious changes spread automatically to all connected devices. Account Lockout: Lost credentials or provider issues can leave you unable to access your own calendar data. Without independent snapshots, recovering from these events requires manual reconstruction from scattered sources. Plakar solves this by creating immutable snapshots that exist outside your CalDAV infrastructure. 
Even if your calendar server is compromised, your backup history remains intact and independently verifiable.\nHow Plakar secures your CalDAV workflows # You can use Plakar\u0026rsquo;s CalDAV integration as:\nSource Connector: Capture complete snapshots of your calendar events and store them in a secure Kloset Store. Destination Connector: Restore calendar data as .ics format from a backup to your CalDAV server. This approach provides several advantages over native calendar exports:\nAutomated Scheduling: Run backups on schedule without manual intervention. Complete Fidelity: All event metadata, recurrence rules, attendees, and timestamps are preserved. Deduplication: All calendar data and metadata are deduplicated to minimize storage. Point-in-Time Recovery: Restore your calendar to any previous backup snapshot. Cross-Provider Migration: Move calendar data between different CalDAV providers seamlessly. What Plakar backs up # Events: Complete event details including title, description, location, and time Recurrence Rules: Repeating event patterns and exception dates Attendees: Participant lists with email addresses and response status Metadata: Creation dates, last modified timestamps, organizer information, UIDs Alarms: Reminder and notification settings Attachments: Document and file references Time Zones: Complete timezone information for accurate scheduling across regions Current Limitations # The CalDAV integration is in beta and has some known limitations:\nBulk Operations: All accessible calendars are backed up together; per-calendar selection is not yet supported. Filtering: Time-based or event-type filtering during backup is not yet available. OAuth2 Providers: Services requiring OAuth2 authentication (like native Google Calendar) require third-party gateway configuration. Write Permissions: Restoration requires write access to the target CalDAV server. 
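Example: backing up a CalDAV calendar # The sketch below is illustrative only: the caldav:// location scheme, the server host, the option names, and the source and store names are assumptions for illustration, not values taken from this page; consult the CalDAV connector reference for the exact syntax.\n$ plakar source add mycalendar caldav://caldav.example.com username=alice\n$ plakar at @mystore backup @mycalendar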
","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/caldav/","section":"Plakar Integrations","summary":"","title":"CalDAV","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/caldav/","section":"Tags","summary":"","title":"CalDAV","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/calendar/","section":"Tags","summary":"","title":"Calendar","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/ceph/","section":"Tags","summary":"","title":"Ceph","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/cloud-storage/","section":"Tags","summary":"","title":"Cloud Storage","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/collaboration/","section":"Tags","summary":"","title":"Collaboration","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/database-dumps/","section":"Tags","summary":"","title":"Database Dumps","type":"tags"},{"content":" Why protecting Dropbox matters # Dropbox is excellent for syncing and sharing files instantly, but syncing is not the same as a backup. Because Dropbox mirrors your actions across all devices, any mistake or error can spread everywhere at once:\nInstant Deletion: If you or a collaborator accidentally delete a folder, that deletion spreads to every linked computer immediately. Limited Recovery: Dropbox has a specific \u0026ldquo;trash\u0026rdquo; window, but if data loss is discovered after that window closes, those files may be permanently gone. Ransomware: If your local files are hit by malware, Dropbox will sync the versions encrypted by malware, overwriting your healthy data in the cloud. 
For your important projects and shared team folders, you need an independent record of your data that remains safe, verifiable, and restorable no matter what happens in your live Dropbox environment.\nDropbox Shared Responsibility Model # Dropbox operates under a shared responsibility model: Dropbox secures the infrastructure, while you\u0026rsquo;re responsible for protecting your data. Plakar ensures you meet your side with independent, verifiable backups.\nTo learn more about the shared responsibility model, you can check the docs on why you should back up your SaaS.\nSecurity and Compromise # Dropbox relies on account credentials that can be targeted or lost through simple mistakes. If an account is compromised or a connected app behaves unexpectedly:\nMass Data Loss: Unauthorized access can result in the deletion or corruption of years of data in seconds. Synchronization Issues: Malicious or accidental changes can be replicated instantly across your entire team. No Manual Recovery: Once data is deleted and the trash window has expired, there is often no way to recover the lost information. Plakar creates immutable snapshots of your data that are end-to-end encrypted with keys that you own. This ensures your data remains private and secure even if your Dropbox account security is ever compromised.\nPlakar also allows for direct inspection of your backups. You can easily browse, search, or verify the integrity of your history via the CLI or UI without needing to perform a full restore first.\nHow Plakar secures your Dropbox workflows # Plakar integrates with Dropbox as a flexible bridge for your data in these ways:\nSource Connector: Take snapshots of your Dropbox data and back them up to a secure Kloset Store. Storage Connector: Use Dropbox as the \u0026ldquo;vault\u0026rdquo; to store your encrypted and deduplicated Plakar backups from other sources. 
Destination Connector: Restore verified snapshots directly back into your Dropbox account exactly when you need them. Plakar uses deduplication to minimize storage space and bandwidth usage while preserving full snapshot history.\n","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/dropbox/","section":"Plakar Integrations","summary":"","title":"Dropbox","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/dropbox/","section":"Tags","summary":"","title":"Dropbox","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/email-backup/","section":"Tags","summary":"","title":"Email Backup","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/exchange/","section":"Tags","summary":"","title":"Exchange","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/ext4/","section":"Tags","summary":"","title":"EXT4","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/fastmail/","section":"Tags","summary":"","title":"Fastmail","type":"tags"},{"content":" Why protecting your Filesystem matters # Most of us protect our files by simply copying them to an external drive or a USB stick. While having a second copy is a good start, it isn\u0026rsquo;t a real backup strategy. Standard file copying leaves your data vulnerable to a few common problems:\nAccidental Deletion: If you delete a file by mistake and your sync tool mirrors that change, the file is gone from both places. Ransomware and Corruption: If your files are hit by malware or a disk failure, \u0026ldquo;copying\u0026rdquo; those files usually just preserves the damaged versions. Human Error: It is incredibly easy to overwrite a new version of a document with an old one during a manual move. 
For your personal photos, system settings, or work projects, you need a way to go back to a specific version of your data that you know is safe and healthy in case of a mishap with your current data.\nSecurity and Integrity # Local drives are the most common place for data to go missing. Whether it\u0026rsquo;s a hard drive finally giving up or a misspelled command, local data is fragile.\nWith Plakar, every backup is a fixed snapshot that cannot be changed or overwritten once created. If your files are later deleted, damaged, or encrypted by malware, you can always return to a known-good version. All backups are end-to-end encrypted to protect your data.\nPlakar also allows for direct inspection of these backups. You can easily browse, search, or verify that your data is safe via the CLI or UI without needing to perform a full restore first.\nHow Plakar secures your Filesystem files # Plakar acts as a bridge for your local data, allowing you to move and protect it seamlessly:\nSource Connector: Take snapshots of any directory on your computer, local hard drives, or mounted NAS and SAN volumes. Storage Connector: Use any local folder or external drive as the \u0026ldquo;vault\u0026rdquo; (Kloset store) to hold your encrypted and deduplicated backups. Destination Connector: Restore your files exactly where they belong, or to an entirely new location, with all original permissions and timestamps intact. Common Questions # 1. Does Plakar keep file details?\nYes. When you restore a file, Plakar brings back the original permissions, timestamps, and ownership, so your files look and act exactly as they did before.\n2. How does Plakar handle symlinks?\nPlakar backs up symlinks as they are. It records the link itself rather than the file it points to, which keeps your backups from growing unexpectedly.\n3. 
Do you store extended attributes?\nYes, Plakar preserves extended attributes (xattrs) of files and directories.\n","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/fs/","section":"Plakar Integrations","summary":"","title":"Filesystem","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/filesystem/","section":"Tags","summary":"","title":"Filesystem","type":"tags"},{"content":" Why protecting FTP servers matters # FTP is a standard protocol for file transfers across public repositories, network appliances, and production systems. However, file transfer is not the same as a backup strategy.\nWhile FTP moves files between systems, it does not protect the data once it arrives. Standard FTP setups face several risks:\nLack of Versioning: Files can be overwritten or deleted immediately after upload. No Immutability: There are no permanent snapshots to revert to if a file is corrupted. Operational Risk: Manual transfers or custom scripts are error-prone and difficult to audit. Simply storing files via FTP is not enough when compliance, uptime, or disaster recovery is critical. You need verifiable, immutable backups that can survive mistakes, misconfigurations, or unauthorized access.\nSecurity and Compromise # If FTP credentials are leaked or an account is compromised, your data is at risk:\nData Loss: Unauthorized access can be used to delete or overwrite entire directories instantly. Corruption: Malicious actors or faulty scripts can modify live data on the server. Synchronization Issues: Automated sync tools can unintentionally spread corruption from one server to another. Without independent snapshots, recovery from these events can be impossible. Plakar closes this gap by providing a system where every snapshot acts as an immutable version that cannot be altered. 
This ensures your history remains intact even if the FTP server itself is compromised.\nPlakar allows for direct inspection of backups, letting you easily browse, search, or verify the integrity of your data via the CLI or UI without needing to perform a full restore first.\nHow Plakar secures your FTP workflows # Plakar can integrate with FTP servers as:\nSource Connector: Take snapshots of files located on a remote FTP server and bring them into a secure Plakar Kloset. Destination Connector: Restore your snapshots to any FTP server in your environment. Viewer: Browse and verify FTP-sourced backups without performing a full restore. Plakar ensures your FTP-based infrastructure remains resilient, secure, and verifiable from creation to recovery.\nNote: The FTP integration preserves all metadata exposed by the FTP server. Metadata availability varies between FTP server implementations, with some providing full file attributes and others exposing only basic information.\n","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/ftp/","section":"Plakar Integrations","summary":"","title":"FTP","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/ftp/","section":"Tags","summary":"","title":"FTP","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/gmail/","section":"Tags","summary":"","title":"Gmail","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/google-cloud-storage/","section":"Tags","summary":"","title":"Google Cloud Storage","type":"tags"},{"content":" Why protecting Google Drive matters # Google Drive is central to everyday work and collaboration, but synchronization is not a backup solution. Actions taken in Google Drive are synced to all connected devices almost instantly. This leaves it vulnerable to:\nAccidental Deletion: Files removed by a user are quickly removed across all devices and shared workspaces. 
Overwrites and Corruption: Bad edits or corrupted files replace healthy versions across the entire environment. Ransomware: Malware-encrypted files are synchronized back to Google Drive, overwriting clean data. Native retention and recovery options are limited in scope and duration. For business‑critical or compliance‑sensitive data, an independent and immutable backup history is essential.\nGoogle Drive Shared Responsibility Model # Google Drive operates under a shared responsibility model: Google secures the infrastructure, while you\u0026rsquo;re responsible for protecting your data. Plakar ensures you meet your side with independent, verifiable backups.\nTo learn more about the shared responsibility model, you can check the docs on Why you should back up your SaaS.\nSecurity and compromise # Access to Google Drive is tied to user accounts, credentials, and connected applications. If any of these are compromised:\nMass data loss can occur within minutes Malicious changes are synchronized automatically Recovery windows may be limited or unavailable Plakar mitigates these risks by creating immutable snapshots of your data that cannot be altered or deleted. Backups are encrypted end‑to‑end, with keys that you own, ensuring privacy and control even if the Google account itself is compromised.\nHow Plakar secures your Google Drive workflows # Plakar integrates with Google Drive as a flexible bridge for your data:\nSource Connector: Take snapshots of your Google Drive files and store them in a secure Kloset Store. Storage Connector: Use Google Drive as a vault to store encrypted and deduplicated Plakar backups from other sources. Destination Connector: Restore verified snapshots directly back into Google Drive when needed. 
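The deduplication these connectors rely on is, at its core, content-addressed storage: data is split into chunks, each chunk is keyed by its cryptographic hash, and a chunk already in the store is never written twice. The following is a conceptual sketch only, not Plakar's implementation (it uses fixed-size chunks for brevity, whereas production systems typically use content-defined chunk boundaries so that insertions don't shift every subsequent chunk).

```python
import hashlib

# Conceptual sketch of deduplicated, content-addressed storage.
# Not Plakar's real code; fixed-size chunking is used for brevity.
class ChunkStore:
    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.chunks = {}                      # hash -> chunk bytes

    def put(self, data: bytes):
        """Store data, returning the chunk-hash list (the 'snapshot')."""
        refs = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)  # identical chunks stored once
            refs.append(h)
        return refs

    def get(self, refs):
        """Reassemble the original data from a snapshot's references."""
        return b"".join(self.chunks[h] for h in refs)

store = ChunkStore()
snap1 = store.put(b"AAAABBBBCCCC")
snap2 = store.put(b"AAAABBBBDDDD")  # shares its first two chunks with snap1
# Both snapshots are fully restorable, yet only 4 unique chunks are kept
# for the 6 chunk references, which is where the storage savings come from.
```

Because every snapshot is just a list of hash references, keeping full history is cheap: unchanged data contributes no new chunks.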
Plakar uses deduplication to minimize storage space and bandwidth usage while preserving full snapshot history.\n","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/googledrive/","section":"Plakar Integrations","summary":"","title":"Google Drive","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/google-drive/","section":"Tags","summary":"","title":"Google Drive","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/icloud/","section":"Tags","summary":"","title":"ICloud","type":"tags"},{"content":" Why protecting iCloud Drive matters # iCloud Drive is central to the Apple ecosystem, seamlessly syncing files across devices. However, synchronization is not a backup solution. Actions taken in iCloud Drive are synced to all connected devices almost instantly. This leaves it vulnerable to:\nAccidental Deletion: Files removed by a user are quickly removed across all devices. 
Backups are end-to-end encrypted, with keys that you own, ensuring privacy and control even if your Apple ID itself is compromised.\nHow Plakar secures your iCloud Drive workflows # Plakar integrates with iCloud Drive as a flexible bridge for your data:\nSource Connector: Take snapshots of your iCloud Drive files and store them in a secure Kloset Store. Storage Connector: Use iCloud Drive as a vault to store encrypted and deduplicated Plakar backups from other sources. Destination Connector: Restore verified snapshots directly back into iCloud Drive when needed. Plakar uses deduplication to minimize storage space and bandwidth usage while preserving full snapshot history. It also allows for direct inspection of backups, letting you browse, search, and verify file content via the CLI or UI without needing to restore to iCloud Drive first.\nCurrent Limitations # The iCloud Drive integration is in beta and has some known limitations:\niCloud Photos: The iCloud Drive API does not provide access to photos stored in iCloud Photos. You cannot back up your iCloud Photos library with this integration. App-Specific Data: Some app containers and system-managed data may not be accessible through the standard iCloud Drive API. ","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/iclouddrive/","section":"Plakar Integrations","summary":"","title":"iCloud Drive","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/icloud-drive/","section":"Tags","summary":"","title":"ICloud Drive","type":"tags"},{"content":" Why protecting IMAP matters # Email is often treated as inherently persistent, but IMAP mailboxes are vulnerable to both accidental and malicious data loss. Standard IMAP setups face several critical risks:\nAccidental Deletion: Users can permanently delete entire folders with a few clicks. Most mail clients offer no meaningful undo beyond a trash folder that itself can be emptied. 
Account Compromise: If credentials are stolen or phished, attackers can delete years of correspondence instantly. Business email compromise attacks specifically target mailboxes to destroy evidence or communication history. Retention Policy Gaps: Server-side policies may automatically delete old messages. Quota pressures can force users to delete important emails. These deletions happen silently and often without adequate warning. Synchronization Cascades: IMAP synchronizes deletions across all connected devices immediately. A mistake on one device propagates everywhere, leaving no local copy to recover from. No Version History: Unlike documents or files, emails have no built-in versioning. Once modified or deleted on the server, the original is gone unless you have an independent backup. Emails often contain business records, compliance data, legal correspondence, or irreplaceable personal history and relying solely on the mail server is not enough. You need verifiable, immutable snapshots that exist independently of your IMAP account.\nSecurity and Compromise # IMAP access is controlled by usernames and passwords or, in some cases, OAuth tokens. These credentials are frequently the target of phishing attacks, credential stuffing, or social engineering.\nIf an IMAP account is compromised:\nTotal Mailbox Wipe: Attackers can delete all messages, folders, and archived mail in seconds through any IMAP client or automated script. Evidence Destruction: Business email compromise (BEC) attackers specifically delete sent mail and correspondence to cover their tracks after fraudulent transactions. Ransomware Encryption: While less common than file encryption, some attacks target cloud-stored email, making messages inaccessible without payment. No Server-Side Recovery: Most mail providers offer limited or no recovery options for bulk deletions. Even when recovery exists, it is often time-limited (7-30 days) and may not restore folder structures or all metadata. 
Compliance Violations: For organizations subject to regulatory requirements, the loss of email records can result in significant legal and financial consequences. Plakar mitigates these risks by creating immutable snapshots outside the live IMAP scope. With end-to-end encryption, your backed-up emails remain private and secure even if your mail server or storage backend is accessed by unauthorized parties.\nHow Plakar secures your IMAP workflows # Plakar integrates with any IMAP-compatible mail server as a flexible backup and recovery solution:\nSource Connector: Backup mailboxes from any IMAP server. Plakar encrypts and deduplicates mail content and saves it to a secure Kloset Store, creating an independent backup layer that survives account compromise or server failures. Destination Connector: Restore snapshots back into any IMAP mailbox, whether to your original account, a different mail server, or a fresh mailbox for migration purposes. This enables multiple backup and recovery strategies:\nProtect corporate mailboxes from accidental deletion or malicious attacks Archive compliance-critical email outside your production mail infrastructure Migrate mailboxes between providers while preserving complete history Maintain air-gapped copies of sensitive correspondence Separate backup credentials from production email access for improved security Plakar works with any IMAP-compatible server, including Gmail, Office 365, Exchange, Dovecot, Zimbra, and self-hosted mail systems.\nPlakar also allows direct inspection of email backups. 
You can browse, search, or verify the integrity of your mailbox snapshots via the CLI or UI without needing to perform a full restore first, saving time and avoiding disruption to your live mail environment.\n","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/imap/","section":"Plakar Integrations","summary":"","title":"IMAP","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/imap/","section":"Tags","summary":"","title":"IMAP","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/infomaniak/","section":"Tags","summary":"","title":"Infomaniak","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/knowledge-management/","section":"Tags","summary":"","title":"Knowledge Management","type":"tags"},{"content":" Why protecting Koofr matters # Koofr provides privacy-focused cloud storage with multi-cloud integration, but these features are not a backup solution. Actions taken in Koofr can affect your files instantly. This leaves it vulnerable to:\nAccidental Deletion: Files removed by a user are permanently deleted from your storage. Overwrites and Corruption: Bad edits or corrupted files replace healthy versions. Ransomware: Malware-encrypted files can overwrite clean data in your Koofr storage. Native retention and recovery options are limited in scope and duration. For business-critical or compliance-sensitive data, an independent and immutable backup history is essential.\nSecurity and compromise # Access to Koofr is tied to user credentials and connected applications. If any of these are compromised:\nMass data loss can occur within minutes Malicious changes can affect all your files Recovery windows may be limited or unavailable Plakar mitigates these risks by creating immutable snapshots of your data that cannot be altered or deleted. 
Backups are encrypted end-to-end, with keys that you own, ensuring privacy and control even if your Koofr account itself is compromised.\nHow Plakar secures your Koofr workflows # Plakar integrates with Koofr as a flexible bridge for your data:\nSource Connector: Take snapshots of your Koofr files and store them in a secure Kloset Store. Storage Connector: Use Koofr as a vault to store encrypted and deduplicated Plakar backups from other sources. Destination Connector: Restore verified snapshots directly back into Koofr when needed. Plakar uses deduplication to minimize storage space and bandwidth usage while preserving full snapshot history. Plakar also allows for direct inspection of backups, letting you browse, search, and verify file content via the CLI or UI without needing to restore to Koofr first.\n","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/koofr/","section":"Plakar Integrations","summary":"","title":"Koofr","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/koofr/","section":"Tags","summary":"","title":"Koofr","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/linux/","section":"Tags","summary":"","title":"Linux","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/mail/","section":"Tags","summary":"","title":"Mail","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/microsoft/","section":"Tags","summary":"","title":"Microsoft","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/migration/","section":"Tags","summary":"","title":"Migration","type":"tags"},{"content":" Why protecting MinIO matters # Object storage is often perceived as durable by default, but durability is not the same as a backup. 
Many organizations use MinIO to store logs, datasets, container images, or even other backup files, assuming they are safe.\nHowever, without independent immutability and integrity validation, data stored in MinIO is still vulnerable to:\nSilent Corruption: Data can degrade over time without being noticed. Accidental Deletion: Misconfigured lifecycle policies can delete important data automatically. Access Mismanagement: If security keys are leaked or misused, your entire data store can be wiped or altered. When retention and recovery are mission-critical, simply storing objects is not enough. You need a verifiable backup strategy.\nSecurity and Compromise # MinIO relies on access and secret keys for authentication. Because these keys are often shared across different services or scripts, they represent a significant security risk. If these credentials are compromised:\nTotal Loss: Attackers can delete or overwrite entire buckets. Automated Damage: Malicious changes can be replicated instantly across your environment. No Recovery: Unless an independent backup exists, there is no way to \u0026ldquo;undo\u0026rdquo; a deletion in a standard object store. Plakar mitigates this risk by providing immutable snapshots stored outside the standard MinIO access scope and encrypted end-to-end, keeping your data private even if the storage backend is accessed by an unauthorized party.\nPlakar also allows for direct inspection of backups: you can easily browse, search, or verify the integrity of your data via the CLI or UI without needing to perform a full restore first.\nHow Plakar secures your MinIO workflows # Plakar integrates with MinIO as a flexible bridge, allowing you to move data securely in either direction:\nSource Connector: Take snapshots of one or multiple MinIO buckets. Plakar encrypts and deduplicates the content before saving it to a trusted Kloset store. 
Destination Connector: Restore verified snapshots back into MinIO, whether on-premise or in the cloud, in a format that matches your original environment. Use MinIO as your Backup Vault # MinIO is also a powerful destination for your Plakar snapshots. By using MinIO as a Kloset storage backend, you can store encrypted, deduplicated, and versioned backups from any source:\nDatabases: Secure your PostgreSQL, MySQL, or MongoDB exports. Systems: Back up file systems from servers, workstations, or NAS devices. Applications: Store exports from containers or cloud applications. Plakar uses deduplication, which significantly reduces the amount of storage space and bandwidth needed as your MinIO-based backup library grows.\nPlakar ensures that MinIO works as both a secure source and a trusted storage backend for your entire infrastructure.\n","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/minio/","section":"Plakar Integrations","summary":"","title":"MinIO","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/minio/","section":"Tags","summary":"","title":"MinIO","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/nas/","section":"Tags","summary":"","title":"NAS","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/network-appliances/","section":"Tags","summary":"","title":"Network Appliances","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/nextcloud/","section":"Tags","summary":"","title":"Nextcloud","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/nfs/","section":"Tags","summary":"","title":"NFS","type":"tags"},{"content":" Why protecting Notion matters # Notion has become central to how organizations document processes, manage projects, and preserve institutional knowledge. 
As more critical information moves into Notion, the risks of data loss grow proportionally.\nNotion provides platform availability and collaboration features, but it does not guarantee recovery from common failure scenarios. Standard Notion usage faces several risks:\nAccidental Deletion: Pages and databases can be permanently deleted by any user with edit access. Unauthorized Changes: API integrations, compromised accounts, or malicious actors can modify or destroy content. Limited Version History: Notion\u0026rsquo;s native version history has retention limits and cannot protect against all scenarios. Export Limitations: Native exports are one-time snapshots in generic formats that lose structure and relationships. Security and Compromise # Notion workspaces are accessed via user accounts, API tokens, and third-party integrations. If credentials are compromised or permissions are misconfigured, your workspace is vulnerable:\nMass Deletion: A compromised account can delete entire page hierarchies instantly. Silent Corruption: Automated integrations can overwrite critical data without detection. API Misuse: Leaked API tokens allow external actors to read, modify, or destroy workspace content. Cascading Changes: Mistakes in shared databases propagate across all connected pages. Without independent snapshots, recovering from these events requires manual reconstruction or reliance on Notion\u0026rsquo;s limited version history. Plakar solves this by creating immutable snapshots that exist outside your Notion workspace. 
Even if your entire workspace is compromised, your backup history remains intact and independently verifiable.\nPlakar allows for direct inspection of backups, letting you browse, search, and verify workspace content via the CLI or UI without needing to restore to Notion first.\nHow Plakar secures your Notion workspace # Plakar connects to your Notion workspace via the official Notion API and creates cryptographically signed, deduplicated snapshots of your content. Each backup captures the complete structure of your workspace, including pages, databases, attachments, and comments.\nYou can use Plakar\u0026rsquo;s Notion integration as:\nSource Connector: Capture complete snapshots of your Notion workspace and store them in a secure Kloset Store. This approach provides several advantages over native Notion exports:\nStructured Representation: Plakar preserves the internal structure of pages and databases, not just rendered output. Deduplication: All content is deduplicated before storage to minimize storage usage. Point-in-Time Recovery: Restore your workspace to any previous backup snapshot. Verification: Validate backup integrity without accessing Notion. What Plakar backs up # Plakar captures comprehensive workspace data through the Notion API:\nPages: Full content, structure, and block-level details Databases: Tables, boards, lists, galleries with all properties and views Media: Images, documents, PDFs, and embedded files Comments: Discussion threads and annotations Relationships: Parent-child hierarchies and database connections Metadata: Creation dates, authors, and modification history Plakar allows for direct inspection of backups: you can easily browse, search, or verify the integrity of your data via the CLI or UI without needing to perform a full restore first.\nCurrent Limitations # The Notion integration is in beta and has some known limitations:\nPermission Model: The integration must be manually shared with each top-level page. 
Pages not explicitly shared will not be backed up, even if they are linked from shared pages. Block Compatibility: Some third-party or custom Notion blocks may not serialize perfectly. Core Notion blocks are fully supported. Media Restoration: Due to current Notion API limitations, media files (images, documents) cannot be restored directly to Notion. You can restore media to the filesystem and manually re-upload. We are actively working on a solution for this. Restoration Target: Restoring to Notion requires a valid Notion Page ID as the destination. You cannot create new top-level pages via the API. ","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/notion/","section":"Plakar Integrations","summary":"","title":"Notion","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/notion/","section":"Tags","summary":"","title":"Notion","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/ntfs/","section":"Tags","summary":"","title":"NTFS","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/object-storage/","section":"Tags","summary":"","title":"Object Storage","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/office-365/","section":"Tags","summary":"","title":"Office 365","type":"tags"},{"content":" Why protecting OneDrive matters # OneDrive\u0026rsquo;s seamless integration with Microsoft 365 makes it indispensable for modern work, but this same integration creates unique risks. When something goes wrong—whether accidental deletion, file corruption, or ransomware—the problem spreads instantly across every connected device and user. This leaves OneDrive environments vulnerable to:\nSynchronized Deletions: A file deleted on one device disappears from OneDrive and all other devices within seconds. 
Rapid Corruption Spread: Corrupted or maliciously encrypted files replace healthy versions across your entire organization. Credential-Based Attacks: A single compromised account can be used to delete or encrypt large portions of your shared files. While OneDrive provides a recycle bin and version history, these features have time limits and may not protect against determined attackers or cascading failures. For regulated industries or business-critical data, these built-in protections are insufficient.\nSecurity and compromise # OneDrive\u0026rsquo;s integration with Microsoft accounts, Active Directory, and third-party apps creates multiple potential entry points for attackers. When credentials are compromised:\nAttackers can delete entire folder structures in minutes Ransomware can encrypt files and sync those encrypted versions to the cloud before detection OAuth token theft can give persistent access even after password changes Traditional version history won\u0026rsquo;t help if an attacker systematically deletes old versions or if the retention period has passed. Plakar creates snapshots that exist outside your OneDrive environment entirely, protected by separate encryption keys that you control. Even if your entire Microsoft 365 tenant is compromised, your backup history remains intact.\nHow Plakar secures your OneDrive workflows # Plakar connects to OneDrive through Microsoft\u0026rsquo;s secure APIs and provides multiple integration points:\nSource Connector: Capture snapshots of your OneDrive files and store them in an independent Kloset Store, completely separate from your Microsoft environment. Storage Connector: Leverage OneDrive\u0026rsquo;s storage capacity to hold encrypted Plakar backups from other systems, creating a multi-layered backup strategy. Destination Connector: Restore files or entire folder structures back to OneDrive when recovery is needed. 
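Verifying a backup without restoring it is possible because snapshots are content-addressed: each stored object can simply be re-hashed and compared against the identifier it is stored under. The sketch below illustrates the idea in Python; it is a conceptual model, not Plakar's implementation, and the file names in it are invented.

```python
import hashlib

# Conceptual model of integrity verification in a content-addressed store:
# each object lives under the hex SHA-256 of its contents, so checking
# integrity is re-hashing and comparing against the key. Illustration only.
def put(store: dict, data: bytes) -> str:
    key = hashlib.sha256(data).hexdigest()
    store[key] = data
    return key

def verify(store: dict) -> list:
    """Return the keys whose contents no longer match their hash."""
    return [
        key for key, data in store.items()
        if hashlib.sha256(data).hexdigest() != key
    ]

store = {}
put(store, b"quarterly-report.docx contents")   # invented example data
key = put(store, b"team-photo.png contents")
assert verify(store) == []                      # everything intact

store[key] = b"bit-rotted bytes"                # simulate silent corruption
assert verify(store) == [key]                   # detected, no restore needed
```

This is why browsing or verifying a snapshot is cheap compared to a full restore: only hashes are recomputed, and nothing has to be written back to the source system.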
Plakar uses deduplication to minimize storage space and bandwidth usage while preserving full snapshot history. Plakar also allows for direct inspection of backups, letting you browse, search, and verify file content via the CLI or UI without needing to restore to OneDrive first.\n","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/onedrive/","section":"Plakar Integrations","summary":"","title":"OneDrive","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/onedrive/","section":"Tags","summary":"","title":"OneDrive","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/opendrive/","section":"Tags","summary":"","title":"Opendrive","type":"tags"},{"content":" Why protecting OpenDrive matters # OpenDrive excels at synchronizing files and making them accessible across devices. But this doesn\u0026rsquo;t count as a backup strategy. Your files are still vulnerable to:\nAccidental Deletion: Deleted files are synced instantly and may be permanently removed once retention limits are reached. Overwrites and Corruption: Bad edits or corrupted files replace healthy versions across all devices. Ransomware: Encrypted files created by malware are synced back to OpenDrive, overwriting clean data. For important personal data, shared folders, or business assets, you need an independent history of your files that cannot be altered by mistakes, malware, or account issues.\nSecurity and compromise # OpenDrive access is tied to user credentials and connected devices. If those are lost, misused, or compromised:\nMass data loss can happen in minutes Malicious changes are synchronized automatically Recovery windows may be limited or unavailable Plakar protects against these scenarios by creating encrypted snapshots that cannot be modified. 
Encryption keys are owned by you, ensuring that your backups remain private and secure even if the OpenDrive account itself is compromised.\nPlakar also allows direct inspection of your backups. You can browse, search, and verify snapshot contents through the CLI or UI without performing a full restore.\nHow Plakar secures your OpenDrive workflows # Plakar integrates with OpenDrive as a flexible bridge for your data:\nSource Connector: Take snapshots of your OpenDrive files and store them in a secure Plakar Kloset. Storage Connector: Use OpenDrive as a vault to store encrypted and deduplicated Plakar backups from other sources. Destination Connector: Restore verified snapshots back into OpenDrive when needed. Plakar uses deduplication to minimize storage usage and bandwidth while preserving full snapshot history. This approach ensures your OpenDrive data remains resilient, verifiable, and easily recoverable.\n","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/opendrive/","section":"Plakar Integrations","summary":"","title":"OpenDrive","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/ovh/","section":"Tags","summary":"","title":"OVH","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/pipelines/","section":"Tags","summary":"","title":"Pipelines","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/portability/","section":"Tags","summary":"","title":"Portability","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/privacy/","section":"Tags","summary":"","title":"Privacy","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/productivity/","section":"Tags","summary":"","title":"Productivity","type":"tags"},{"content":" Why protecting Proton Drive matters # Proton Drive provides strong privacy through end-to-end encryption, but like any cloud storage 
platform, it faces risks from accidental deletion, device compromise, and sync propagation:\nInstant Deletion: If you or a collaborator accidentally delete a folder, that deletion spreads to every synced device immediately. Limited Recovery Window: Proton Drive provides a trash retention period, but if data loss is discovered after that window closes, those files may be permanently gone. Ransomware: If your local files are encrypted by malware, Proton Drive will sync those corrupted versions, overwriting your healthy data in the cloud. For important documents and sensitive files, an independent backup provides an additional safety layer that remains available regardless of what happens in your live Proton Drive environment.\nSecurity and Compromise # Cloud storage accounts can be compromised through credential leaks, phishing, or device theft. When an account is accessed without authorization:\nMass Deletion: Unauthorized access can result in the deletion of files across all synced devices. Synchronization Issues: Malicious or accidental changes replicate instantly across your environment. Limited Recovery Options: Once the trash retention period expires, recovery becomes difficult or impossible. Plakar creates immutable snapshots that exist outside your Proton Drive sync scope. These backups are encrypted and remain intact even if your Proton Drive account is compromised.\nHow Plakar secures your Proton Drive workflows # Plakar integrates with Proton Drive as a flexible bridge for your data:\nSource Connector: Take snapshots of your Proton Drive data and store them in a secure Kloset Store. Storage Connector: Use Proton Drive as storage for your encrypted and deduplicated Plakar backups from other sources. Destination Connector: Restore verified snapshots directly back to your Proton Drive account when needed. 
Plakar uses deduplication to minimize storage space and bandwidth usage while preserving full snapshot history.\n","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/protondrive/","section":"Plakar Integrations","summary":"","title":"Proton Drive","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/proton-drive/","section":"Tags","summary":"","title":"Proton Drive","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/public-repositories/","section":"Tags","summary":"","title":"Public Repositories","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/qnap/","section":"Tags","summary":"","title":"QNAP","type":"tags"},{"content":" Why protecting S3 data matters # S3 is often treated as the backup, but object storage durability is not the same as data protection. While S3 provides excellent infrastructure resilience, it remains logically vulnerable to:\nSilent Corruption: Data can degrade over time without detection, and standard object storage lacks built-in integrity validation across your entire dataset. Accidental Deletion: Misconfigured lifecycle policies can automatically delete critical objects. Human errors like bulk delete operations happen instantly and affect thousands of objects. Versioning Gaps: Versioning is optional and easy to misconfigure. Even when enabled, it doesn\u0026rsquo;t protect against lifecycle policy deletions or deliberate version purging. Replication Risks: S3 replication is a double-edged sword. It spreads corruption, accidental deletions, and malicious changes just as quickly as legitimate data. For production data, assets, logs, and compliance records, S3 needs an independent safety net beyond buckets and replication.\nWhat happens when S3 credentials are compromised # S3 access is controlled by API keys and IAM policies. 
These credentials are frequently shared across services, embedded in scripts, or stored in configuration files, creating significant exposure.\nIf credentials are leaked or permissions are too broad:\nTotal Loss: Attackers can delete or overwrite entire buckets through the API. Automated scripts can wipe thousands of objects in seconds. Ransomware Encryption: Malicious actors can encrypt all bucket contents, making your data inaccessible without paying a ransom. Damage Propagation: Replication and sync jobs immediately propagate malicious changes across regions and accounts, amplifying the impact. Version Manipulation: Even with versioning enabled, attackers can delete object versions, configure aggressive lifecycle policies, or simply wait until retention windows expire. No Recovery Path: Without an independent backup, there is no way to \u0026ldquo;undo\u0026rdquo; deletions or modifications in standard object storage. Plakar mitigates these risks by creating immutable snapshots stored outside the live S3 namespace. With end-to-end encryption and support for offline or air-gapped retention, your backups remain secure even if your cloud credentials are compromised.\nHow Plakar secures your S3 workflows # Plakar integrates with S3 as a flexible bridge, enabling secure data movement in multiple directions:\nSource Connector: Take snapshots of one or multiple S3 buckets. Plakar encrypts and deduplicates the content before saving it to a trusted Kloset Store, creating an independent backup layer. Storage Connector: Use S3-compatible storage (AWS S3, MinIO, Ceph, Wasabi) as your Kloset backend. Store encrypted, deduplicated, and versioned snapshots from any source like databases, file systems, containers, or other cloud services. Destination Connector: Restore verified snapshots back into S3, whether to your original bucket, a different region, or an entirely separate account. 
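The three connector roles above follow the same CLI pattern Plakar uses for its other integrations; a minimal sketch, assuming a Plakar v1.1+ setup — the `prodBucket`/`recoveryBucket` labels, bucket names, and the `s3://` location syntax are illustrative placeholders, not verbatim from the S3 connector reference:

```shell
# Illustrative labels and s3:// locations; adapt to the S3 connector's
# documented syntax and your own buckets and credentials.

# Register a production bucket as a backup source
$ plakar source add prodBucket s3://s3.amazonaws.com/prod-data

# Snapshot it into an independent Kloset store
$ plakar backup @prodBucket

# Restore a verified snapshot into a different bucket or account,
# using credentials separate from production IAM roles
$ plakar destination add recoveryBucket s3://s3.eu-west-1.amazonaws.com/recovery-data
$ plakar restore -to @recoveryBucket <snapid>
```

Keeping the restore destination under its own credentials, distinct from production IAM roles, is what isolates the backup layer if production keys leak.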
This enables multiple backup strategies:\nPull data from production buckets into isolated backup storage Push encrypted snapshots to low-cost or off-cloud object storage Separate backup credentials from production IAM roles for improved security Plakar also allows direct inspection of backups. You can browse, search, or verify the integrity of your S3 data via the CLI or UI without performing a full restore first.\nInstead of relying solely on S3 configuration and access controls, Plakar provides cryptographic guarantees and operational control over your data all the way from snapshot creation to integrity verification to recovery.\n","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/s3/","section":"Plakar Integrations","summary":"","title":"S3","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/s3/","section":"Tags","summary":"","title":"S3","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/s3-compatible/","section":"Tags","summary":"","title":"S3-Compatible","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/saas/","section":"Tags","summary":"","title":"SaaS","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/san/","section":"Tags","summary":"","title":"SAN","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/scaleway/","section":"Tags","summary":"","title":"Scaleway","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/scality/","section":"Tags","summary":"","title":"Scality","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/scripting/","section":"Tags","summary":"","title":"Scripting","type":"tags"},{"content":" Why protecting SFTP matters # SFTP is a standard for secure file transfers across Linux servers, BSD hosts, and NAS devices like Synology and QNAP. 
However, secure transfer is not the same as a backup strategy.\nWhile SFTP secures the data in transit, it does not protect the data once it arrives where you want to store it. Standard SFTP setups face several risks:\nLack of Versioning: Files can be overwritten or deleted immediately after upload. No Immutability: There are no permanent snapshots to revert to if a file is corrupted. Operational Risk: Manual transfers or custom scripts are error-prone and difficult to audit. When compliance, uptime, or disaster recovery is critical, simply storing files on SFTP is not enough. You need verifiable, immutable backups that can survive mistakes, misconfigurations, or attacks.\nSecurity and Compromise # SFTP relies on SSH keys or passwords. If these credentials are leaked or an account is compromised, your data is at risk:\nData Loss: Unauthorized access can be used to delete or overwrite entire directories instantly. Corruption: Ransomware or rogue scripts can encrypt live data on the server. Synchronization Issues: Automated sync tools can unintentionally spread corruption from one server to another. Without independent snapshots, recovery from these events can be impossible. Plakar closes this gap by providing a system where every snapshot acts as an immutable version that cannot be altered or deleted. This ensures your history remains intact even if the SFTP server itself is compromised.\nPlakar also allows direct inspection of backups: you can browse, search, or verify the integrity of your data via the CLI or UI without performing a full restore first.\nHow Plakar secures your SFTP workflows # Plakar turns any SFTP-accessible server into a flexible backup system by acting as a bridge between your data and your storage. 
By using deduplication, Plakar ensures that only unique data chunks are stored, keeping storage costs low.\nYou can use Plakar through several integration points:\nSource Connector: Take snapshots of files located on a remote SFTP server and bring them into a secure Plakar Kloset. Storage Connector: Use an SFTP server as the vault to store your encrypted, deduplicated Plakar backups. Destination Connector: Restore your snapshots to any SFTP server in your environment. This flexibility allows you to choose the backup model that fits your needs:\nPush Backups: Send snapshots from source servers to a central storage location independently. Pull Backups: Centrally collect data from multiple remote servers into a single Kloset. Plakar ensures your SFTP-based infrastructure remains resilient, secure, and verifiable from creation to recovery.\n","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/sftp/","section":"Plakar Integrations","summary":"","title":"SFTP","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/sftp/","section":"Tags","summary":"","title":"SFTP","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/smb/","section":"Tags","summary":"","title":"SMB","type":"tags"},{"content":" Why use Streams instead of Files # Plakar’s Stdio integration can back up any data stream provided via stdin like database dumps, system logs, command output or custom scripts. Unlike workflows that first create intermediate files, streaming data directly avoids several challenges:\nStorage Waste: You need enough free space to hold the uncompressed export before it even gets to your backup tool. Security Risks: Temporary files often sit on your disk unencrypted while waiting to be backed up. Complexity: You have to manage the creation and deletion of these intermediate files. 
With the Stdio integration, Plakar reads data as it’s generated, encrypts and deduplicates it, and streams it straight to your Kloset Store without creating any intermediate files on disk.\nAutomation with Stdio # Stdio is useful for administrators and power users building automated backups while avoiding temporary files. Data from scripts, commands, or applications can be backed up directly.\nWhen recovery is needed, streams can be fed back into databases, tools, or terminals immediately, with full integrity verification. You can also inspect, browse, and search backups via the CLI or UI without performing a full restore first.\nHow Plakar handles your data streams # Plakar handles live data directly:\nSource Connector: Capture output from any command or script and save it as a named object in a snapshot in Kloset. Destination Connector: Stream your saved data back into any tool or display it directly in your terminal without writing a file to disk first. Common Questions # 1. What kind of data can I back up this way?\nAnything that produces text or binary output, such as database dumps, system logs, or diagnostic scripts.\n2. Do I need to name the stream?\nYes. Streams don’t have filenames, so you assign a name to identify and retrieve them later.\n3. Can I pipe a backup directly into another program?\nYes. 
You can restore a specific object from a snapshot and feed it directly into a tool like a database importer for fast recovery.\n","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/stdio/","section":"Plakar Integrations","summary":"","title":"Stdio","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/stdio/","section":"Tags","summary":"","title":"Stdio","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/categories/storage-connector/","section":"Categories","summary":"","title":"Storage Connector","type":"categories"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/synology/","section":"Tags","summary":"","title":"Synology","type":"tags"},{"content":" Using tar archives with Plakar # Tar has been a standard Unix archive format for decades, commonly used to bundle files for distribution, transfer, and simple backups. It is widely supported and highly portable.\nTar archives themselves do not provide backup semantics such as deduplication, version tracking, or built-in encryption. Managing backups as collections of independent tar files becomes difficult as data evolves over time.\nTar archives are imported into Plakar, which handles storage, deduplication, and verification.\nBacking up tar archives as snapshots with Plakar # Imported tar archives are stored in Plakar as immutable snapshots and deduplicated to reduce storage usage.\nSnapshots can be browsed and inspected through the CLI or UI without extracting files. When needed, snapshots can be exported back to standard tar archives compatible with existing Unix tools.\nMigration and Compatibility # Tar integration allows Plakar to fit into existing tar-based backup workflows. 
It enables you to:\nImport existing tar archives into a single, deduplicated Kloset store Maintain compatibility with systems and tools that require tar format Export snapshots as tar archives for distribution or compliance Use tar alongside Plakar during transitions between backup workflows How Plakar works with tar archives # Plakar supports tar archives as an import format:\nSource Connector: Import data from tar archives into Plakar as snapshots with automatic deduplication. Exporting snapshots as tar archives is supported by default in Plakar.\n","date":"23 March 2026","externalUrl":null,"permalink":"/integrations/tar/","section":"Plakar Integrations","summary":"","title":"Tar","type":"integrations"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/tar/","section":"Tags","summary":"","title":"Tar","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/unix/","section":"Tags","summary":"","title":"Unix","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/categories/viewer/","section":"Categories","summary":"","title":"Viewer","type":"categories"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/wasabi/","section":"Tags","summary":"","title":"Wasabi","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/workspace/","section":"Tags","summary":"","title":"Workspace","type":"tags"},{"content":"TL;DR:\nThe team at FactorFX built a Proxmox integration for Plakar that wraps Proxmox’s native vzdump backups and stores them as deduplicated Plakar snapshots, making VM and container backups portable, encrypted, and easy to restore across clusters.\nProxmox support had been requested many times\u0026hellip; but the interesting part is that we didn\u0026rsquo;t end up building it ourselves.\nThroughout 2025, the topic of virtual machine backups came up every now and then.\nWe knew we wanted to support it, but we 
weren\u0026rsquo;t convinced it was the highest priority. There are many things people want to back up, and virtual machines didn\u0026rsquo;t seem more urgent than S3, GCS or PostgreSQL in daily discussions.\nThen something interesting happened.\nMathieu and I were holding a booth at the Capitole du Libre 2025, in Toulouse, and we kept getting the same question from visitors:\nCan Plakar back up Proxmox clusters?\nSo many people asked that Mathieu started looking into how Proxmox backups worked.\nLater, François-Xavier, who was also holding a booth for FactorFX, came over and told me:\nHey Gilles, we need to talk about Proxmox support in Plakar!\nA couple of weeks later, at the Tech Rocks event, the exact same scenario unfolded with a completely different crowd.\nAt that point, it became clear this was no longer a \u0026ldquo;should we prioritize Proxmox\u0026rdquo; debate, but a \u0026ldquo;who does it and when\u0026rdquo;.\nFactorFX enters the game! # Technically, our team can write most integrations fairly quickly, anywhere between an hour and a few days depending on the complexity.\nWhich makes it tempting to just do them ourselves: spend a few hours on it and the problem is solved.\nBut that\u0026rsquo;s not what we want.\nOur goal is for Plakar to become the de facto standard to back up anything, and that can\u0026rsquo;t happen if we\u0026rsquo;re the only ones writing integrations.\nThe ecosystem of tools out there is simply too large.\nInstead, we want a layered ecosystem:\nofficial integrations maintained by us third-party integrations that we review and stamp as \u0026ldquo;trusted by Plakar\u0026rdquo; community integrations where anyone can build support for whatever software they like Since FactorFX had already shown interest in Proxmox, this felt like the perfect opportunity to bootstrap that model.\nAfter a few discussions, Gilles Dubois came back a few days later with a working Proxmox integration!\nWhat is Proxmox? 
# Proxmox Virtual Environment (PVE) is an open-source virtualization platform used to run and manage virtual machines and containers.\nAt a technical level, it combines several well-known pieces of infrastructure software:\nKVM to run virtual machines LXC to run containers multiple storage backends such as ZFS, Ceph or simple directory storage a web UI and REST API to manage everything Proxmox includes many of the features people expect from enterprise virtualization platforms, including live migration, high availability, storage replication, built-in backups and clustering. All of this in a single system that is relatively easy to deploy and operate.\nOver the past year, it has also become a very popular alternative to other hypervisors, especially as many organizations started reconsidering their virtualization stack following the VMware/Broadcom flustercluck.\nWhich explains why, every time we showed Plakar at a conference booth, someone eventually asked:\nThat\u0026rsquo;s cool\u0026hellip; but can it back up Proxmox?\nWhy use Plakar for Proxmox backups? # Proxmox already includes a backup tool called vzdump, and it works very well, so why introduce another tool in the mix?\nThe answer is that Plakar does not replace Proxmox backups; it extends them.\nThe integration simply relies on vzdump to generate the backup archives, and then stores them inside Plakar snapshots. This means the backups behave exactly like native Proxmox backups, while gaining a few extra properties along the way.\nFor example, Plakar deduplicates data across snapshots. If multiple virtual machines share the same base image, that data only needs to be stored once.\nSnapshots can also be archived to different storage backends, making it easy to keep long-term backups on object storage or cold storage.\nFinally, because Plakar integrations share the same connector model, data is not tied to a single environment. 
A VM backed up from Proxmox could later be restored to another cluster, archived elsewhere, or inspected without restoring the entire machine.\nInstalling the Proxmox integration # The Proxmox integration has been committed to a public repository and is only available for Plakar starting with v1.1.0-beta.\nTo test it, you first need to install our latest beta of Plakar:\n$ go install github.com/PlakarKorp/plakar@v1.1.0-beta.7 You can then either use our prebuilt package by authenticating to our platform:\n$ plakar login [...] $ plakar pkg add proxmox Or build the integration yourself\u0026hellip;\n$ plakar pkg build proxmox /usr/bin/make -C /var/folders/9x/9k0f6mc10sbd0_kfx63__fvc0000gn/T/build-proxmox-v1.1.0-rc.1-4157532844 83b7da91: OK ✓ / 83b7da91: OK ✓ /manifest.yaml 83b7da91: OK ✓ /proxmoxExporter 83b7da91: OK ✓ /proxmoxImporter Plugin created successfully: proxmox_v1.1.0-rc.1_darwin_arm64.ptar \u0026hellip; and install the resulting ptar:\n$ plakar pkg add ./proxmox_v1.1.0-rc.1_darwin_arm64.ptar Aaaaaand that\u0026rsquo;s it.\nLocal vs remote operation # Before showing how it\u0026rsquo;s used, a few words about how it works.\nThe integration supports two operating modes.\nIn local mode, Plakar runs directly on the Proxmox node:\nProxmox node ├ vzdump └ plakar This is the simplest setup.\nIn remote mode, Plakar runs on a separate machine and connects to the Proxmox host via SSH:\nBackup server │ │ SSH ▼ Proxmox node └ vzdump This allows a single Plakar instance to back up multiple hypervisors.\nBacking up virtual machines and containers # Once the integration is installed, backing up Proxmox virtual machines and containers becomes straightforward.\nFirst, we configure a Proxmox source:\n$ plakar source add myProxmox proxmox+backup://10.0.0.10 \\ mode=remote \\ conn_username=root \\ conn_identity_file=/path/to/key \\ conn_method=identity Then we can start backing up workloads.\nFor example, to back up a single virtual machine:\n$ plakar backup -o vmid=101 
@myProxmox Or all the machines in a pool:\n$ plakar backup -o pool=prod @myProxmox Or even the entire hypervisor:\n$ plakar backup -o all @myProxmox Under the hood, the integration invokes vzdump, collects the resulting archive, and ingests it into a Plakar snapshot.\nOnce stored, the backup benefits from all Plakar features such as deduplication, encryption and snapshot browsing.\nRestoring virtual machines and containers # Restoring workloads is equally straightforward.\nFirst, configure a Proxmox destination:\n$ plakar destination add myProxmox \\ proxmox+backup://10.0.0.10 \\ mode=remote \\ conn_username=root \\ conn_identity_file=/path/to/key \\ conn_method=identity Then restore a snapshot:\n$ plakar restore -to @myProxmox \u0026lt;snapid\u0026gt; The integration uploads the dump archive to the Proxmox node and restores it using native tools:\nqmrestore for virtual machines pct restore for containers It is also possible to restore only one VM from a snapshot containing multiple machines:\n$ plakar restore -to @myProxmox \u0026lt;snapid\u0026gt;:/backup/qemu/101_myvm If configured, the restored machine can automatically start once the restore completes.\nBeyond simple backups # Because Plakar integrations share the same connector model, data is not locked into a single environment.\nFor example, a virtual machine backed up from Proxmox could be:\ninspected with high granularity using the Plakar UI. stored in a minio instance, err, an S3 bucket at Scaleway, OVH or Exoscale. synchronized between stores at Scaleway, OVH and Exoscale for multiple copies. exported to a ptar archive and archived in cold storage. restored to another cluster. 
This flexibility enables backup workflows that go far beyond traditional hypervisor backups.\nWrapping up # This Proxmox integration is still early, but it\u0026rsquo;s already working well.\nIf you\u0026rsquo;re running a Proxmox cluster, be it on-premise or in the cloud, please give it a try and let us know what you think!\nAnd if you\u0026rsquo;re interested in writing integrations yourself, take a look at what the FactorFX team achieved: they had an initial version working in just a few days, with no prior experience whatsoever with our codebase.\nWe can help you get started and bootstrap things, and you might just end up building that one integration that everyone wants but nobody has written yet!\n","date":"16 March 2026","externalUrl":null,"permalink":"/posts/2026-03-16/backing-up-proxmox-with-plakar-a-third-party-integration-built-in-a-few-days/","section":"Plakar Blog","summary":"The team at FactorFX built a Proxmox integration for Plakar that wraps Proxmox’s native vzdump backups and stores them as deduplicated Plakar snapshots, making VM and container backups portable, encrypted, and easy to restore across clusters.","title":"Backing up Proxmox with Plakar: a third-party integration built in a few days","type":"posts"},{"content":"","date":"16 March 2026","externalUrl":null,"permalink":"/authors/gilles/","section":"Authors","summary":"","title":"Gilles","type":"authors"},{"content":"","date":"16 March 2026","externalUrl":null,"permalink":"/tags/proxmox/","section":"Tags","summary":"","title":"Proxmox","type":"tags"},{"content":"","date":"16 March 2026","externalUrl":null,"permalink":"/tags/aks/","section":"Tags","summary":"","title":"AKS","type":"tags"},{"content":"","date":"16 March 2026","externalUrl":null,"permalink":"/tags/containers/","section":"Tags","summary":"","title":"Containers","type":"tags"},{"content":"","date":"16 March 
2026","externalUrl":null,"permalink":"/tags/csi/","section":"Tags","summary":"","title":"CSI","type":"tags"},{"content":"","date":"16 March 2026","externalUrl":null,"permalink":"/tags/eks/","section":"Tags","summary":"","title":"EKS","type":"tags"},{"content":"","date":"16 March 2026","externalUrl":null,"permalink":"/tags/gke/","section":"Tags","summary":"","title":"GKE","type":"tags"},{"content":" Why protecting Kubernetes clusters matters # Kubernetes manages the full lifecycle of your workloads, but it does not protect the data that keeps those workloads running. Three distinct layers are at risk:\netcd: The key-value store that holds all cluster state. If too many nodes fail simultaneously, etcd cannot recover on its own. Without an independent backup, the cluster configuration is gone. Manifests: Resource definitions, namespace configurations, and workload specs can be accidentally deleted, overwritten by bad deployments, or lost during a cluster migration. Kubernetes versioning does not give you a restore point. Persistent Volumes: Stateful workloads store data in PVCs that live outside the cluster\u0026rsquo;s built-in resilience model. A misconfigured storage class, a deleted PVC, or a failed migration can result in permanent data loss. Each layer requires a different backup strategy. Plakar handles all three.\nWhat happens when a cluster is compromised? # Kubernetes clusters are increasingly targeted by attackers who gain access through misconfigured RBAC, leaked credentials, or supply chain vulnerabilities. The consequences can be severe:\nTotal state loss: With sufficient API access, an attacker can delete namespaces, wipe persistent volumes, and corrupt etcd — in seconds. Ransomware on persistent storage: PVCs attached to compromised pods can be encrypted or exfiltrated without any cluster-level protection. No clean rollback: Without independent snapshots stored outside the cluster, there is no verified state to recover from. 
Plakar stores snapshots in an isolated Kloset, encrypted end-to-end and independent of the cluster itself. The backups remain intact even if the cluster is fully compromised.\nHow Plakar protects your Kubernetes infrastructure # Plakar covers Kubernetes backups at three levels, each independent and composable:\netcd backup: A full snapshot of cluster state, intended as the last line of defense in a catastrophic failure scenario. Manifest backup: All Kubernetes resources across the cluster (or scoped to a specific namespace) stored as a browsable, searchable Plakar snapshot. Restore the full cluster, a single namespace, or one deployment. Browse past snapshots to investigate what the cluster looked like at any point in time. Persistent volume backup: PVC contents captured via CSI driver snapshots, ingested into a Kloset store, and restorable into any PVC, on the same cluster or a different one. Because Plakar connectors are composable, data is not locked to a single environment. A persistent volume backed up from one cluster can be restored to another, archived to S3, or exported as a portable ptar archive.\n","date":"16 March 2026","externalUrl":null,"permalink":"/integrations/kubernetes/","section":"Plakar Integrations","summary":"","title":"Kubernetes","type":"integrations"},{"content":"","date":"16 March 2026","externalUrl":null,"permalink":"/tags/kvm/","section":"Tags","summary":"","title":"KVM","type":"tags"},{"content":"","date":"16 March 2026","externalUrl":null,"permalink":"/tags/lxc/","section":"Tags","summary":"","title":"LXC","type":"tags"},{"content":"","date":"16 March 2026","externalUrl":null,"permalink":"/tags/persistent-volumes/","section":"Tags","summary":"","title":"Persistent Volumes","type":"tags"},{"content":" Why protecting Proxmox data matters # Proxmox includes strong built-in backup capabilities, but backup archives stored on the same cluster or storage backend as your workloads are not truly independent. 
A single failure, misconfiguration, or attack can affect both your live machines and the backups protecting them.\nCommon risks in Proxmox environments:\nSingle-cluster exposure: Backups stored locally or on cluster-attached storage share the same failure domain as the machines they protect. No cross-environment portability: Native Proxmox backups are tightly coupled to the cluster that created them, making restoration to a different environment difficult. Storage misconfiguration: Aggressive retention policies or accidental deletions can wipe backup archives before they are needed. No integrity verification: Without cryptographic validation, there is no reliable way to confirm a backup is intact until you attempt a restore. For production workloads, compliance requirements, or multi-cluster environments, Proxmox needs an independent safety net beyond what vzdump provides on its own.\nWhat happens when a Proxmox cluster is compromised? # Proxmox exposes a web UI and REST API that are typically accessible over the network. If an attacker gains access to a node or the management interface:\nTotal Loss: Virtual machines and containers can be deleted or overwritten through the API. Backup archives stored on the same infrastructure are equally exposed. Ransomware Encryption: Malicious actors can encrypt live machine disks and backup archives simultaneously, leaving no clean copy to recover from. No Recovery Path: Without an independent, immutable backup stored outside the cluster, there is no way to recover deleted or encrypted workloads. Plakar mitigates these risks by storing snapshots in an isolated Kloset, encrypted end-to-end, and independent of the Proxmox cluster itself. Even if the cluster is fully compromised, your backups remain intact.\nHow Plakar secures your Proxmox workflows # Plakar integrates with Proxmox by wrapping its native vzdump tool. When a backup runs, vzdump generates the archive as it normally would, and Plakar ingests it into a snapshot. 
The result is a standard Proxmox backup with additional guarantees layered on top.\nSource Connector: Back up individual VMs, containers, pools, or entire hypervisors into a secure Plakar Kloset, encrypted and deduplicated automatically. Destination Connector: Restore verified snapshots back to any Proxmox node, whether the same cluster or a different one, using the native restore tools Proxmox already understands. Plakar supports both local and remote operation:\nLocal mode: Plakar runs directly on the Proxmox node alongside vzdump. Remote mode: Plakar runs on a separate backup server and connects to one or more Proxmox nodes over SSH, enabling a single instance to manage backups across an entire fleet of hypervisors. This enables backup strategies that go beyond what Proxmox offers natively:\nStore encrypted VM snapshots on object storage such as S3, Scaleway, OVH, or Exoscale. Deduplicate across machines that share a common base image, storing shared data only once. Inspect backups at a granular level via the CLI or UI without performing a full restore. Restore individual VMs from a snapshot that contains multiple machines. Archive snapshots to cold storage or export them as portable ptar archives. 
","date":"16 March 2026","externalUrl":null,"permalink":"/integrations/proxmox/","section":"Plakar Integrations","summary":"","title":"Proxmox","type":"integrations"},{"content":"","date":"16 March 2026","externalUrl":null,"permalink":"/tags/virtual-machines/","section":"Tags","summary":"","title":"Virtual Machines","type":"tags"},{"content":"","date":"16 March 2026","externalUrl":null,"permalink":"/tags/virtualization/","section":"Tags","summary":"","title":"Virtualization","type":"tags"},{"content":"","date":"16 March 2026","externalUrl":null,"permalink":"/tags/vzdump/","section":"Tags","summary":"","title":"Vzdump","type":"tags"},{"content":"TL;DR:\nWe built a Kubernetes integration for Plakar that backs up clusters at three levels: etcd (disaster recovery), manifests (granular restore and inspection), and persistent volumes (via CSI snapshots). This enables full cluster recovery, fine-grained restores, and data portability across environments.\nAfter joining the Linux Foundation and the CNCF, we started to attend some events, like the Cloud Native Days in Paris or the upcoming KubeCon in Amsterdam. While we\u0026rsquo;re already providing a large number of integrations, we felt we couldn\u0026rsquo;t go empty-handed to these events; we had to announce and present something new: something like a Kubernetes integration.\nFrom left to right: Omar, Julien, Antoine \u0026amp; Gilles at our Cloud Native Days booth. 
I\u0026rsquo;ve worked a lot with Kubernetes in recent years, but it was mostly as a user and in a particular environment: strict adherence to a GitOps flow, managed Kubernetes, and almost no usage of any PVCs since all the data was in managed databases or on buckets.\nSo this has also been a chance for me to dive into the Kubernetes Golang APIs and into the workings of CSI-backed drives.\nInstalling the k8s integrations # At the time of this writing, the etcd and k8s integrations have been committed to public repositories and are only available for plakar v1.1.0-beta.\nTo test them, you first need to install our latest beta of plakar:\n$ go install github.com/PlakarKorp/plakar@v1.1.0-beta.4 This is needed for the commands of this article to succeed!\nDisaster recovery with etcd # To provide a complete solution, I decided to tackle the backup strategy at multiple levels. The lowest level is keeping etcd safe.\netcd is a distributed key-value store for distributed systems. It\u0026rsquo;s often used as the single source of truth in Kubernetes clusters.\nUnder normal circumstances, etcd can resist a partial disruption of the nodes of its cluster, but if too many nodes fail, it might not recover. 
Given how critical this piece is, it\u0026rsquo;s important to have a sound disaster recovery strategy.\nFor this, we\u0026rsquo;ve just released a first version of the etcd integration: backing up etcd is now as easy as:\n$ plakar pkg add etcd $ plakar backup etcd://node1:2379 Unfortunately, due to how etcd restore works, it\u0026rsquo;s difficult to restore in a granular way, so this is really the last line of defense in case of a wide cluster disruption.\nTo inspect or restore the state of the cluster in a more granular way, we need to handle the manifests.\nSaving the manifests # The second layer is backing up the manifests: these represent all the workloads on the cluster at a given time, with extra metadata about their current state as well.\nAt this layer, it\u0026rsquo;s easier to browse the content of the backups, investigate the differences between snapshots, or restore the resources in a granular way:\nrestoring the whole cluster configuration\nrestoring just one namespace\nor even restoring a single Deployment.\nThis is part of what the kubernetes integration does: it fetches all the manifests (that is, the resources present on the cluster) for archival with Plakar.\n$ plakar pkg add k8s $ plakar backup k8s://localhost:8001 The presence of the status metadata in the backup also unlocks other uses: for example, it may help investigate incidents since it\u0026rsquo;s easily possible from the UI to browse what was happening at a specific time in the cluster (the nodes available, the state of the deployments, etc.), in addition to existing monitoring tools.\nWhat about the data? # Even if Kubernetes was not initially designed for stateful workloads, in practice it\u0026rsquo;s normal to have Persistent Volumes attached to pods, and these need to be protected as well.\nThe other main job of the kubernetes integration is to provide a way to back up and restore the contents of persistent volumes. 
Incidentally, this was also the most complicated part for me to implement.\nI owe a lot to Mathieu and Gilles for helping me on this journey, providing support when I was in a pinch, and for brutally simplifying the design to make the integration easier to develop and use, and more powerful, too. When working alone, it\u0026rsquo;s easy to fall for the temptation of writing \u0026ldquo;clever\u0026rdquo; code that ends up being fairly complex and just plain weird to use.\nWe started with CSI-backed PVCs, as they represent the de facto standard for persistent storage in Kubernetes clusters.\n$ plakar pkg add k8s $ plakar backup k8s+csi://localhost:8001/prod/my-pvc The integration works by first creating a snapshot of a given PVC. Then, when it\u0026rsquo;s ready, it mounts it in a pod running a small helper that runs our filesystem importer. Plakar connects to it and ingests the data. Finally, the PVC snapshot gets deleted from the Kubernetes cluster.\nRestoring works in a similar way, except that no snapshot is taken.\nA powerful feature provided by Plakar is that it is possible to mix and match connectors, so, for example, it\u0026rsquo;s possible to restore an etcd snapshot into, say, a persistent volume in a Kubernetes cluster, or to move data from a PVC to an S3 bucket. The sky is the limit!\nWrapping up # What lies ahead is to keep testing the integration across different flavors of Kubernetes distributions and providers, and extend the support for non-CSI volumes. 
If you\u0026rsquo;re running a Kubernetes cluster, be it on-premises or managed somewhere, please don\u0026rsquo;t hesitate to give it a try and let us know what you think!\n","date":"18 February 2026","externalUrl":null,"permalink":"/posts/2026-02-18/backing-up-kubernetes-clusters-with-plakar/","section":"Plakar Blog","summary":"We built a Kubernetes integration for Plakar that backs up clusters at three levels: etcd (disaster recovery), manifests (granular restore and inspection), and persistent volumes (via CSI snapshots). This enables full cluster recovery, fine-grained restores, and data portability across environments.","title":"Backing up kubernetes clusters with Plakar","type":"posts"},{"content":"","date":"18 February 2026","externalUrl":null,"permalink":"/authors/op/","section":"Authors","summary":"","title":"Op","type":"authors"},{"content":"","date":"7 February 2026","externalUrl":null,"permalink":"/tags/oci/","section":"Tags","summary":"","title":"Oci","type":"tags"},{"content":"","date":"7 February 2026","externalUrl":null,"permalink":"/tags/storage/","section":"Tags","summary":"","title":"Storage","type":"tags"},{"content":"TL;DR:\nAfter a podcast suggestion, we built an OCI registry storage backend for Plakar\u0026hellip; and it took ~30 minutes. OCI registries (the tech behind Docker Hub) are content-addressed, immutable artifact stores, which map surprisingly well to Plakar’s packfile model. It’s now available in beta (plakar pkg add oci), fully working and testable.\nThree weeks ago, Julien and I were invited by Bret Fisher on his podcast, Cloud Native DevOps and Docker Talk.\nThe discussion was very interesting and, after the show, Bret casually mentioned that we should really add an OCI registry storage integration.\nUp to that point, my knowledge of OCI registries came mostly from conversations with SREs; I had never worked with them directly. 
But since I had claimed during the show that writing a storage integration was easy, it felt like the perfect opportunity to put code where my mouth is.\nThe next day, after wrapping up my work, I jumped on the task. Half an hour later, I announced on our Discord that the OCI registry integration was already working.\nThis is not a flex, and it’s not because I’m particularly fast or good.\nIt happened that quickly because extending Plakar is genuinely simple. We designed the storage layer so that adding a new backend is trivial. I honestly believe we’ve lowered the bar enough that a first-year computer science student could implement a new Plakar storage backend as a weekend project.\nThat’s exactly the kind of extensibility we were aiming for: not just possible, but boringly easy.\nFirst of all, what is an OCI Registry? # An OCI registry is a standardized service for storing, versioning, and distributing binary artifacts. While best known for container images, it is not limited to them.\nOCI stands for Open Container Initiative, an open governance body that defines vendor-neutral standards for containers. 
One of these standards, the OCI Distribution Specification, defines how clients and registries communicate over HTTP to push, pull, and manage artifacts.\nIn short, an OCI registry is a content-addressable, HTTP-based artifact store with strong guarantees around immutability and integrity.\nFrom container images to general artifacts # OCI registries were originally designed for container images (for example, Docker Hub), but the underlying model is generic.\nAn OCI artifact consists of:\none or more blobs (binary data)\na manifest describing those blobs\noptionally an index to group variants\nidentifiers based on cryptographic digests, not filenames\nBecause of this, OCI registries are now used to distribute much more than containers, including Helm charts, WASM modules, VM images, CLI tools, plugins, policies, signatures, and other software artifacts.\nContent-addressed and immutable by design # All content in an OCI registry is addressed by its hash (typically SHA-256). This provides:\nIntegrity: pulled content is exactly what was pushed\nDeduplication: identical blobs are stored once\nImmutability: any change results in a new digest\nEfficient caching: blobs can be safely cached and reused\nTags such as latest or v1.2.3 are merely mutable references. The true identity of an artifact is its digest.\nWhy OCI registries in our context? # OCI registries have become a universal distribution layer because they are standardized, widely deployed, secure, and efficient at scale. They are also deeply integrated into modern CI/CD ecosystems.\nIf you need a reliable way to publish, version, and retrieve large or immutable artifacts, an OCI registry is often the simplest and most robust solution, and you don’t need to be shipping containers to benefit from it.\nHow does it tie to Plakar? 
# In many aspects, the kloset storage layer is very similar to an OCI registry, and the primitives map fairly well onto one another.\nDuring backups, Plakar generates packfiles that are MAC-indexed in kloset, and it locates the data through that MAC. The storage layer can store these packfiles in the OCI registry and map the MAC to the blob through a manifest. With this plumbing in place, calls are mapped 1:1 with the registry, with manifests used to locate blobs by their MAC.\nInstalling the OCI registry integration # At the time of this writing, the OCI registry integration has been committed to a public repository and is only available for plakar v1.1.0-beta.\nTo test it, you first need to install our latest beta of plakar:\n$ go install github.com/PlakarKorp/plakar@v1.1.0-beta.4 You can then either use our prebuilt package by authenticating to our platform:\n$ plakar login [...] $ plakar pkg add oci Or build the integration yourself\u0026hellip;\n$ plakar pkg build oci /usr/bin/make -C /var/folders/9x/9k0f6mc10sbd0_kfx63__fvc0000gn/T/build-oci-v1.1.0-beta.4-510526837 48317f2d: OK ✓ / 48317f2d: OK ✓ /manifest.yaml 48317f2d: OK ✓ /ociStorage Plugin created successfully: oci_v1.1.0-beta.4_darwin_arm64.ptar \u0026hellip; and install the resulting ptar:\n$ plakar pkg add ./oci_v1.1.0-beta.4_darwin_arm64.ptar That\u0026rsquo;s it, you\u0026rsquo;re good to go!\nLaunching a registry container # The easiest way to test is to simply start a docker container for the OCI registry.\nThis can be done with this simple command, which will run the container and bind the registry to port 5000 on localhost:\n$ docker run -d --name oci-registry \\ -p 5000:5000 \\ -v $(pwd)/registry-data:/var/lib/registry \\ registry:2 b61a4bc5df40307b6301d30f692cd276db64acd8448258ba49f2a4c6c760cb8c Using Plakar with the registry container # To avoid having to type the passphrase over and over in this article, I just start by setting the PLAKAR_PASSPHRASE environment variable to a key that I 
generated with openssl rand -hex 32:\n$ export PLAKAR_PASSPHRASE=6292d531ecede679b5e4afbbe9ce994a78c9c7986c742e97232f2730b8bfb5df Once this is done, I can create the store:\n$ plakar at oci://localhost:5000/helloworld create Then back up my current directory:\n$ plakar -silent at oci://localhost:5000/helloworld backup The snapshot is now in store and can be inspected as usual:\n$ plakar at oci://localhost:5000/helloworld ls 2026-01-16T22:22:25Z d61ae1c6 216 MiB 1s /Users/gilles/Wip/github.com/PlakarKorp/plakar Including its content:\n$ plakar at oci://localhost:5000/helloworld cat d61:LICENSE Copyright (c) 2021 Gilles Chehade \u0026lt;gilles@poolp.org\u0026gt; Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED \u0026#34;AS IS\u0026#34; AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. Everything is browsable as with any other storage through our UI:\n$ plakar at oci://localhost:5000/helloworld ui Limitations # This is the first iteration of the integration. 
It is not production-ready and currently lacks some important features, such as authentication.\nThe goal was to build a functional proof of concept first; it will now need polishing and hardening before real-world use.\nConclusion # This integration was not originally planned; it emerged from a casual discussion.\nEven without prior knowledge of OCI registries, it took roughly half an hour to implement, which is a good demonstration of how easy it is to extend Plakar to new use cases.\nLet us know what you think, and don’t hesitate to suggest new ideas!\n","date":"7 February 2026","externalUrl":null,"permalink":"/posts/2026-02-07/storing-backups-in-an-oci-registry/","section":"Plakar Blog","summary":"After a podcast discussion, we implemented an OCI registry storage backend. This article discusses the concept and showcases our proof of concept.","title":"Storing backups in an OCI registry","type":"posts"},{"content":"","date":"26 January 2026","externalUrl":null,"permalink":"/tags/open-source/","section":"Tags","summary":"","title":"Open-Source","type":"tags"},{"content":"","date":"26 January 2026","externalUrl":null,"permalink":"/tags/plakar/","section":"Tags","summary":"","title":"Plakar","type":"tags"},{"content":"In case you missed it, here is a video recap of our 2025 retrospective. It highlights just how impressive the amount of work delivered by our team has been, with meaningful progress and achievements every single month throughout 2025.\nI cannot overstate how proud I am to be part of a team that maintains such a strong focus on quality, while still moving at a very high pace and delivering consistently, week after week.\nAt this pace, whenever we go more than a week without communicating, it feels like we have been silent for ages. 
In reality, it has only been a couple of months since our last community release and just one month since our enterprise preview release.\nThat said, we have not shared many updates about the community edition since November, as our focus has been on building the enterprise edition. The two are far from isolated: they share the same building blocks, and a significant part of the work done for the enterprise edition now flows back into the community edition.\nTL;DR # v1.1.0-beta is out, stable and fully backward compatible. We expect RC in February and the final release in March.\nWhat’s new: a cleaner terminal UI, multi-directory backups (single source), much better FUSE mounting (plus HTTP mounts), and a new package manager for integrations.\nReliability: the old agent is gone and replaced by a tiny background service called cached that only manages shared cache and locking while commands run in the CLI.\nPerformance: big wins in our Korpus tests, especially restore speeds. More backup latency improvements are coming with the next optimizations.\nMemory and disk: peak RAM use is down, and the VFS cache footprint is much smaller by trading a bit of bandwidth for disk space.\nFor integrators: importer/exporter/store interfaces are much simpler, so writing a connector is easier than before.\nNext up: point-in-time recovery, multi-source snapshots, better store maintenance and repair tools, and packfile work. Try it, tell us what breaks, and help shape the final release.\nAnnouncing plakar v1.1.0-beta # Throughout 2025, we released four updates to the v1.0.x branch of plakar’s community edition, each bringing its share of improvements and new features. As we begin 2026, roughly two months after our last release, we are entering the beta phase of the v1.1.0 branch, which packs a lot of new capabilities and internal improvements.\nBeta often implies instability, but in this case most of the work for the v1.1.0 branch was completed during the second half of 2025. 
Since then, we have already run thousands, if not tens of thousands, of snapshots through this codebase.\nSo while this is still a beta and should not be used on production stores, it is already very stable. We strongly encourage you to try it out, especially since it is fully backward compatible. You can safely create a new store, sync your data to it, and experiment with the beta without impacting existing setups.\nThe goal of this beta phase is to gather user feedback, polish areas we may have missed in both code and documentation, and gain additional confidence before a final release, given the significant amount of work that has gone into this branch.\nOur current plan is to move from beta to release candidate during February, and from release candidate to a final release in March. New development during this period will continue on our main development branch and will only be backported to beta or release candidates when it is clearly low risk.\nHow can you help? # Feel free to join our Discord channel and help us by testing the beta and reporting any issues you encounter. Your feedback is invaluable and directly helps shape the final release.\n$ go install github.com/PlakarKorp/plakar@v1.1.0-beta.3 Active testers can earn a contributor role, allowing them to talk in our hackrooms, and with significant contributions come significant goodies too :-p\nWhat\u0026rsquo;s new? # The v1.1.0 branch introduces a lot of new features as well as improvements all over the place.\nTerminal UI # Starting with the most visible change.\nUntil now, plakar’s terminal output was very verbose. While this provided a lot of information, it made it difficult to track progress and quickly identify what mattered during long-running operations.\nIn v1.1.0, terminal output has been completely reworked around a new terminal UI rendering interface. 
We introduced an stdio renderer, which preserves the exact same output format as before, and a new tui renderer that provides a dedicated terminal UI for better visibility during long-running jobs.\nThis makes it easier to understand what plakar is doing at a glance, while still retaining access to detailed output when needed. The result is a quieter, more readable terminal experience, especially for long-running backups and restores.\nThe new tui view is available on backup and restore commands, but we will progressively cover more commands, such as check or sync, as we go.\nMulti-directory support # The v1.1.0 branch introduces support for multi-directory backups.\n$ plakar backup /etc /home Early versions of plakar only supported filesystem-based backups, which made multi-directory snapshots straightforward. When additional integrations were introduced, however, resource naming collisions became possible: a local path such as /etc could clash with an object path like s3://bucket/etc.\nTo avoid ambiguity, multi-directory support was temporarily removed until it could be implemented in a clear and unambiguous way across all integrations.\nWith v1.1.0, this limitation is now lifted and multi-directory backups are once again supported on a single source. Work to unlock multi-source backups has begun but could not be completed in time for this release; hopefully it can land in the next one.\nBetter FUSE and mounting # FUSE (Filesystem in Userspace) allows plakar snapshots to be mounted as a regular filesystem, making it possible to browse snapshot contents as if they were present on disk. This makes it possible to use your operating system’s standard tools on data contained in plakar snapshots seamlessly, without having to actually restore the data to disk: structure and data are transparently streamed as they are accessed.
Our FUSE support was fairly stable on Linux, despite occasional hiccups. On macOS, the situation was more complicated, as FUSE is not supported natively: using FUSE meant either installing a kernel extension, which was cumbersome to set up but for which we provided stable support, or relying on the FUSE-T implementation, which is friendlier to users since it uses a local NFS v4 server instead of a kernel extension, but with which plakar didn\u0026rsquo;t play well.\nIn v1.1.0, FUSE support has been completely rewritten and significantly improved, making it more reliable, including over high-latency connections.\nAt the same time, the plakar mount command was extended to support mounting specific snapshots or individual snapshot directories. We also introduced support for exposing mounts over HTTP, making it possible to serve a specific directory from a specific snapshot.\nWork is ongoing to support additional mount protocols, including S3.\nNew package manager # We provided a means for users to install integrations through a plugin system in v1.0.0, but our package manager was a bit\u0026hellip; meh.\nOur beta comes with a brand new package manager that\u0026rsquo;s simpler, cleaner, and much more featureful. I won\u0026rsquo;t spoil too much as I think we\u0026rsquo;ll have a dedicated article on it, but at the very least I can spoil that it supports integration updates, which is something we were lacking.\nNew integration interfaces # The importer, exporter, and storage interfaces have been redesigned to be simpler and more explicit. This work lays the groundwork for faster iteration and more reliable third-party integrations.\nA new store backend can now be implemented by satisfying a simple interface. 
In practice, this means that almost anything capable of List, Put, Get, and Delete operations can be used to host a Kloset store:\ntype Store interface {\n    Create(context.Context, []byte) error\n    Open(context.Context) ([]byte, error)\n    Ping(context.Context) error\n    Origin() string\n    Type() string\n    Root() string\n    Flags() location.Flags\n    Mode(context.Context) (Mode, error)\n    Size(context.Context) (int64, error)\n    List(context.Context, StorageResource) ([]objects.MAC, error)\n    Put(context.Context, StorageResource, objects.MAC, io.Reader) (int64, error)\n    Get(context.Context, StorageResource, objects.MAC, *Range) (io.ReadCloser, error)\n    Delete(context.Context, StorageResource, objects.MAC) error\n    Close(ctx context.Context) error\n}\nOn the importer side, the interface now turns any data source that can be enumerated into a candidate for being backed up by plakar:\ntype Importer interface {\n    Origin() string\n    Type() string\n    Root() string\n    Flags() location.Flags\n    Ping(context.Context) error\n    Import(context.Context, chan\u0026lt;- *connectors.Record, \u0026lt;-chan *connectors.Result) error\n    Close(context.Context) error\n}\nFinally, on the exporter side, a symmetrical interface makes it possible to restore data by receiving an enumeration of resources and their contents:\ntype Exporter interface {\n    Origin() string\n    Type() string\n    Root() string\n    Flags() location.Flags\n    Ping(context.Context) error\n    Export(context.Context, \u0026lt;-chan *connectors.Record, chan\u0026lt;- *connectors.Result) error\n    Close(context.Context) error\n}\nA side effect of this rework is that an Importer can become the input to an Exporter, something that was not possible with previous interfaces. 
Chaining an Importer into an Exporter will simplify testing considerably and allow the implementation of a new \u0026ldquo;transfer\u0026rdquo; capability to synchronize origins and destinations without going through a Kloset!\nThat aside, with these interfaces, developers can easily extend plakar without needing to understand its internal architecture:\nImplement a few simple CRUD functions and you have a new store.\nImplement a function that enumerates your dataset and you have a new importer.\nImplement a function that reconstructs a dataset from an enumeration and you have a new exporter.\nWe believe this significantly lowers the barrier to entry. Writing a first integration can now take just a few hours for newcomers, and only minutes for experienced developers. I think we will start seeing some Twitch sessions of integration development from our team soon :-)\nImproved performance # Contrary to common assumption, backup and restore complexity isn’t driven by total bytes but by per-item work: tree traversal and stat calls, open/close overhead, lots of small random I/O, hashing/chunking each object, dedup lookups and metadata handling, plus the CPU, memory and coordination costs that come with huge file counts and deep directory trees. So beyond a few dozen GiB, total size ceases to be informative: transfer time scales roughly linearly and is determined by raw storage or network throughput, not by the metadata, CPU, and I/O-seek costs that actually make backups hard.\nWe measure performance with Korpus, an assorted collection of resources (low- and high-entropy; small and large; images, audio, video, text, code, PDFs, etc.) 
laid out across a very large, deep directory tree.\nOp Items v1.0.6 v1.1.0-beta\nBackup 1,000,000 ~3 minutes ~3 minutes\nSync 1,000,000 ~5 minutes ~5 minutes\nRestore 1,000,000 ~60 minutes ~3 minutes (-95%)\nCheck 1,000,000 ~1 minute ~1 minute\n*tested on a 14-core mac mini with 64 GiB RAM and NVMe storage.\nNote the dramatic improvement for Restore, which is due to several changes:\na better algorithm for restore\na better use of parallelism\na better use of our prefetch mechanism\nthe removal of some expensive system calls that were not strictly necessary\nWe didn\u0026rsquo;t include most of our backup optimizations in v1.1.0-beta: they\u0026rsquo;re fairly recent and we didn\u0026rsquo;t want them to interfere with the release cycle, as we\u0026rsquo;re already happy with unoptimized performance. Most of them will be merged during the beta phase; others may have to wait for the next release in Q2.\nOp Items v1.0.6 v1.1.0-beta (w/optimizations)\nBackup 1,000,000 ~3 minutes ~2 minutes (-33%)\nSync 1,000,000 ~5 minutes ~4 minutes (-20%)\nRestore 1,000,000 ~60 minutes ~3 minutes (-95%)\nCheck 1,000,000 ~1 minute ~1 minute\n*tested on a 14-core mac mini with 64 GiB RAM and NVMe storage.\nNote that we also have plans for further optimizations, which we have not yet pushed past the point of initial experimentation, and which show promising results in all cases for future releases.
Replacing this component was non-trivial as our cache layer must satisfy strict correctness and performance properties and there are few viable alternatives. Over a focused three-month effort we reworked the caching subsystem: while we haven’t yet removed the old implementation completely, the changes already deliver clear RAM reductions and better behaviour under load.\nLastly, we now default to spilling temporary data to disk rather than keeping it all in memory. The performance impact is small, but the memory savings are significant; it is a practical trade-off that greatly improves stability for large or long-running operations.\nv1.0.6 v1.1.0-beta\nBackup ~3.0 GiB ~1.3 GiB (-57%)\nSync ~3.6 GiB ~1.7 GiB (-52%)\nRestore ~2.3 GiB ~800 MiB (-66%)\nCheck ~1.3 GiB ~800 MiB (-40%)\n*tested on a 14-core Mac mini with 64 GiB RAM and NVMe storage.\nThere\u0026rsquo;s still room for improvement, but since memory usage scales with concurrency, it can be controlled by reducing concurrency to an amount that suits your RAM requirements.\nImproved cache space # The local cache was taking too much space, so we reworked caching to reduce on-disk usage.\nPlakar uses three caches:\nState cache: not really a cache but a required copy of the store state used to decide whether data needs to be pushed. It’s synchronized before an operation and recreated if missing, so it must remain local.\nVFS cache: stores metadata for resources as last seen so we can skip work (and avoid recomputing chunks) when a resource appears unchanged. We removed the on-disk VFS cache and now query the store instead, saving local storage in exchange for additional bandwidth. During the beta we’ll add a flag to prefer the previous local-cache behaviour when that makes more sense.\nScan cache: a transient cache built during operations and discarded when the operation completes.
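As an illustration of what the VFS cache buys us, here is a minimal Go sketch of metadata-based change detection. The `meta` struct and `unchanged` function are invented for this example and are not Plakar's actual types: the idea is simply that when the metadata recorded at the previous backup matches what stat reports now, the file can be skipped without re-reading or re-chunking it.

```go
package main

import "fmt"

// meta is an illustrative subset of the per-resource metadata a VFS
// cache could record (Plakar's real cache layout is not shown here).
type meta struct {
	Size    int64
	ModTime int64 // unix seconds
	Inode   uint64
}

// unchanged reports whether a resource can be skipped: if the metadata
// observed now matches what the previous run recorded, there is no need
// to re-read and re-chunk the file.
func unchanged(prev, cur meta) bool {
	return prev.Size == cur.Size &&
		prev.ModTime == cur.ModTime &&
		prev.Inode == cur.Inode
}

func main() {
	prev := meta{Size: 4096, ModTime: 1700000000, Inode: 42}

	fmt.Println(unchanged(prev, meta{Size: 4096, ModTime: 1700000000, Inode: 42})) // true: skip re-chunking
	fmt.Println(unchanged(prev, meta{Size: 4096, ModTime: 1700000500, Inode: 42})) // false: re-read the file
}
```

Querying the store for this metadata instead of a local cache answers the same question, just with a network round-trip instead of local disk.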
Removing the on-disk VFS cache has significantly reduced local cache usage for large trees by trading some bandwidth for disk space, while still allowing users to opt back into a local cache to save bandwidth when desired.\nItems v1.0.6 v1.1.0-beta\n1,000,000 4 GiB 1.8 GiB (-55%)\nIn short: less local disk usage by default, an explicit option to favour the old local cache if you prefer lower bandwidth, and the required state cache still ensures correctness and fast change detection.\nAgent is dead. Long live cached. # When you use backup software, you expect to be able to run multiple commands in parallel. This implies some level of shared cache and state.\nIn plakar v1.0.0, to coordinate access and handle locking, we introduced an agent process that executed commands on behalf of the CLI. This meant the agent had to be running for anything to work at all, which quickly proved to be a fairly annoying requirement.\nTo address this, plakar v1.0.4 introduced an auto-spawned, auto-teardown agent. While this improved usability, the agent remained on the critical path. Every command was still executed by the agent, with the CLI merely proxying input and output.\nThis design came with drawbacks:\nInteractive prompting was difficult or impossible for some integrations, for example SFTP passphrase prompts.\nA failure in a single command, including an out-of-memory condition, could take down the agent rather than just the operation.\nThe agent accumulated complexity by combining execution, coordination, and cache management in a single process.\nWith plakar v1.1.0, the agent is gone.\nIt is replaced by cached, a lightweight, auto-managed process dedicated exclusively to shared cache maintenance and locking. cached will automagically start if needed and stop when not needed anymore, so you never have to think about it.
Commands now execute directly in the CLI, while cached ensures safe, coordinated access to the cache.\nThis separation of responsibilities simplifies the architecture, dramatically reduces the failure blast radius, unlocks features that were previously difficult to implement, and makes plakar considerably more reliable.\nAnd\u0026hellip; plenty more # It is difficult to write about everything we have been working on while remaining concise, especially since much of this work is not immediately visible.\nSince v1.0.6, there have been hundreds of commits to Kloset, hundreds more to plakar itself, and dozens across various integrations. Altogether, this represents thousands of lines of changes, peer-reviewed and approved, spread over several months and across multiple subsystems of our software.\nSeveral of these changes will be covered in dedicated articles, as we plan to significantly increase our technical writing on this blog this year.\nWhat’s next? # The immediate next step is the stabilization of the beta, followed by the release of v1.1.0-rc and then v1.1.0.\nIn parallel, we are continuing work on several major features. Much of the prerequisite groundwork for these efforts has already been completed as part of the v1.1.0 development cycle, and some of them already have active implementation branches.\nThe next major milestones include:\nPoint-in-time recovery (PITR) support, enabling more robust and precise database backups.\nMulti-source snapshots, allowing a single snapshot to span multiple data sources.\nImproved store maintenance and space recovery, through recompaction and packfile optimizations.\nStore repairability, with mechanisms to recover from corruption, including ECC and cross-store data sharing.
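To give a taste of what ECC-based repairability means, here is a toy Go sketch of the simplest possible erasure code, a single XOR parity block. This is purely illustrative and not Plakar's actual repair scheme: the point is only that storing a little redundancy alongside the data lets you rebuild any single lost block from the survivors.

```go
package main

import "fmt"

// parity computes an XOR parity block over equally sized data blocks.
func parity(blocks [][]byte) []byte {
	p := make([]byte, len(blocks[0]))
	for _, b := range blocks {
		for i := range b {
			p[i] ^= b[i]
		}
	}
	return p
}

// rebuild reconstructs a single missing block by XOR-ing the parity
// block with every surviving data block.
func rebuild(par []byte, survivors [][]byte) []byte {
	out := append([]byte(nil), par...)
	for _, b := range survivors {
		for i := range b {
			out[i] ^= b[i]
		}
	}
	return out
}

func main() {
	blocks := [][]byte{[]byte("pack"), []byte("file"), []byte("data")}
	p := parity(blocks)

	// Pretend the middle block was lost to corruption and rebuild it
	// from the parity block and the two surviving blocks.
	restored := rebuild(p, [][]byte{blocks[0], blocks[2]})
	fmt.Println(string(restored)) // file
}
```

Real schemes (Reed-Solomon and friends) tolerate multiple lost blocks at a tunable storage cost, but the recovery principle is the same.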
Stay tuned, and happy hacking 🚀\n","date":"26 January 2026","externalUrl":null,"permalink":"/posts/2026-01-26/plakar-v1.1.0-beta-the-foundation-for-whats-next/","section":"Plakar Blog","summary":"plakar v1.1.0-beta marks a major step forward, with significant performance gains, architectural simplifications, and powerful new user-facing features. From faster backups and restores to better mounting, cleaner integrations, and a more reliable execution model, this release lays solid foundations for what comes next. The beta is stable, backward compatible, and ready to be explored.","title":"Plakar v1.1.0-beta: the foundation for what’s next","type":"posts"},{"content":"","date":"11 January 2026","externalUrl":null,"permalink":"/tags/overlayfs/","section":"Tags","summary":"","title":"Overlayfs","type":"tags"},{"content":"","date":"11 January 2026","externalUrl":null,"permalink":"/tags/qcow2/","section":"Tags","summary":"","title":"Qcow2","type":"tags"},{"content":"Plakar offers a nice UI to see the content of a Kloset store.\nYou can run it locally with just one command, and a live demo is available at https://demo.plakar.io.\nplakar at \u0026lt;store\u0026gt; ui When browsing the files inside a snapshot, you can preview text files, images, videos, PDFs, and audio files directly in the UI.\nThis is extremely handy when you want to quickly inspect the contents of a backup without restoring an entire snapshot to local disk, especially when you don\u0026rsquo;t know which snapshot contains the file you\u0026rsquo;re looking for.\nIn this blog post, we\u0026rsquo;ll walk through our first research and experiments aimed at adding a PostgreSQL viewer to the Plakar UI.\nWhy a PostgreSQL viewer? # Let\u0026rsquo;s say you accidentally deleted an image from your favorite photo album. 
With Plakar UI, you can quickly scan through your backups, preview the images inside each snapshot, and restore the snapshot that contains the missing photo.\nNow, what if the lost data is stored inside a PostgreSQL database? The only way to check whether a snapshot contains the missing data is to restore the entire database backup and run SQL queries against it.\nWhat if we provided a PostgreSQL viewer directly inside the Plakar UI? You could connect to a snapshot, run SQL queries, and preview the results without restoring the full database, in the same way you can preview text files or images today.\nPerforming the backup # We wrote a PostgreSQL backup guide that explains how to perform a physical backup of a PostgreSQL database using pg_basebackup and Plakar.\nThe command looks like this:\n$ export PGUSER=xxx $ export PGPORT=5432 $ export PGHOST=xxx $ export PGPASSWORD=xxx $ pg_basebackup -D - -F tar -X fetch | \\ plakar at /var/backups backup -no-progress tar:///dev/stdin It assumes a PostgreSQL server running on PGHOST:PGPORT. It generates a .tar archive of /var/lib/postgresql/data and uses Plakar\u0026rsquo;s tar source importer (reading from stdin) to import the data into a Kloset store.\nRunning a Docker containerized PostgreSQL instance # The first approach to display a PostgreSQL viewer from the UI would be to restore the snapshot to a local directory and run a PostgreSQL Docker container using that directory as a data volume.\nAs explained in the PostgreSQL backup guide, you could first restore the snapshot:\n$ plakar at /var/backups restore -to ./mydir \u0026lt;snapshot_id\u0026gt; Then, start a PostgreSQL container using that directory as a data volume:\n$ docker run --rm -ti \\ --name pg \\ -v ./mydir:/var/lib/postgresql/data \\ postgres It works well, but it requires restoring the full snapshot to disk first. 
For large databases, this can be slow, storage-intensive, and even impossible: you may not have enough free disk space to restore the entire database.\nWhat if there was a way to run PostgreSQL directly on top of the snapshot data stored in Kloset, without restoring the full snapshot first?\nPlakar mount to the rescue # The command plakar mount allows mounting a Kloset store as a local read-only filesystem:\n$ ./plakar mount -to /mnt/mysnapshot \u0026lt;snapshot_id\u0026gt; This command is magical: it allows browsing the files inside a snapshot as if they were stored on a local disk, but files are fetched on-demand from the Kloset store.\nIt seems like the perfect fit for our PostgreSQL viewer: we could mount the snapshot containing the PostgreSQL data files, and run PostgreSQL directly on top of that mount point. Since files are fetched on-demand, only the data that PostgreSQL actually reads would be downloaded from Kloset.\nIt is particularly important not to download the entire database, because PostgreSQL data directories can be huge, and you don\u0026rsquo;t want to transfer gigabytes of data just to run a few read-only queries.\nLet\u0026rsquo;s try to run PostgreSQL on top of the mounted Kloset snapshot:\n$ docker run --rm -ti \\ --name pg \\ -v /mnt/mysnapshot:/var/lib/postgresql/data:ro postgres Ouch, it immediately fails:\n$ chmod: changing permissions of \u0026#39;/var/lib/postgresql/data\u0026#39;: Read-only file system $ chown: changing ownership of \u0026#39;/var/lib/postgresql/data\u0026#39;: Read-only file system ... PostgreSQL tries to change permissions and ownership of its data directory. 
Since the directory is mounted read-only, the server cannot start.\nUnfortunately, this is a fundamental problem: PostgreSQL needs write access to its data directory, and there is no fully read-only mode, even if you only want to run read-only queries.\nOverlayfs # Overlayfs provides a way to create a writable filesystem (the upper layer) on top of a read-only filesystem (the lower layer).\nIt works only on Linux, but it could be a good fit for our use case. Let\u0026rsquo;s explore this option.\nFirst, create the required directories and mount the overlay filesystem:\n$ mkdir upper workdir merged $ mount -t overlay overlay \\ -o lowerdir=/mnt/mysnapshot,upperdir=./upper,workdir=./workdir \\ ./merged Here:\n/mnt/mysnapshot is the read-only mount point created by plakar mount,\n./upper is a writable directory where changes will be stored,\n./merged is a combined view of both layers.\nWhenever a file is read from ./merged, if it exists in ./upper, it is read from there; otherwise, it is read from /mnt/mysnapshot.\n[Diagram: PostgreSQL uses the overlayfs merged dir as its data volume. Unmodified files are read from the Kloset snapshot (the read-only lowerdir); files written at least once are read from the writable upperdir. Any write first copies the whole file up from the lowerdir, then applies the modification in the upperdir.]\nFor any write operation, the file is first copied from the lower layer to the upper layer (if it exists in the lower layer), and then the write is performed on the copy in the upper layer.\nThis looks promising: we could
expose ./merged as the PostgreSQL data directory, allowing PostgreSQL to write to its files, while the original data is still read from the Kloset snapshot on-demand.\nFail… again # Let\u0026rsquo;s try running PostgreSQL on top of overlayfs:\n$ docker run --rm -ti \\ --name pg \\ -v ./merged:/var/lib/postgresql/data \\ postgres PostgreSQL starts successfully after some time, and you can run queries:\n$ docker exec -ti pg psql -U postgres -c \u0026#39;\\l\u0026#39; It seems to work, but it doesn\u0026rsquo;t. As we explained before, any write operation causes the file to be copied from the lower layer to the upper layer. A single permission change or byte update causes the entire file to be duplicated.\nThis is another fundamental limitation of overlayfs for our use case. We can\u0026rsquo;t avoid writes: PostgreSQL modifies all its data files, and even small modifications cause entire files to be copied to the upper layer.\nIt is transparent to the user, but under the hood, if the PostgreSQL data directory contains, let\u0026rsquo;s say, 100 GB of data files, then running PostgreSQL on top of overlayfs will end up copying all those 100 GB into the upper directory.\nOperating at the block level # The problem with overlayfs is that it operates at the file level: any write to a file causes the entire file to be copied to the upper layer.\nWhat if we could operate at the block level instead? That is, only the modified blocks of a file would be copied to the upper layer, while unchanged blocks would still be read from the lower layer.\nThis would be perfect for our use case: PostgreSQL modifies many files, but usually only a small portion of each file. 
If we could copy only the modified blocks, the amount of data copied would be much smaller.\nQcow2 # With qcow2, modified blocks are written to a new image, while unchanged blocks are read from a backing file.\nLet\u0026rsquo;s create a base qcow2 image with enough space to hold the PostgreSQL data:\n$ qemu-img create -f qcow2 base.qcow2 10G Format it, and copy the PostgreSQL data to the image:\n$ qemu-nbd --connect /dev/nbd0 ./base.qcow2 $ mkfs.ext4 /dev/nbd0 $ mkdir -p ./mnt \u0026amp;\u0026amp; mount /dev/nbd0 ./mnt $ plakar at /var/backups restore -to ./mnt \u0026lt;snapshot_id\u0026gt; # Cleanup $ umount ./mnt $ qemu-nbd -d /dev/nbd0 Now, ./base.qcow2 is a qcow2 image that contains a filesystem with the PostgreSQL data.\nHere, we had to build the base image by restoring the full snapshot. This is because we are still experimenting, but if we go down this path, we could expose Kloset data as a block device directly, avoiding the full restore step. It is a non-trivial problem, but solvable.\nNow, let\u0026rsquo;s create an overlay image using the base image as backing storage:\n$ qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2 $ qemu-nbd --connect /dev/nbd0 ./overlay.qcow2 $ mount /dev/nbd0 ./mnt Now, ./mnt contains the PostgreSQL data directory, backed by base.qcow2. 
Any read operation fetches data from base.qcow2, and write operations copy only the modified blocks from base.qcow2 to overlay.qcow2, and not the entire files.\n[Diagram: PostgreSQL uses the mounted filesystem as its data volume. Unmodified blocks are read from base.qcow2 (the read-only backing image); modified blocks are read from overlay.qcow2 (copy-on-write blocks). Any write copies only the affected block from the base image into overlay.qcow2, then applies the modification there.]\nLet\u0026rsquo;s run PostgreSQL:\n$ docker run --name pg --rm -ti -v ./mnt:/var/lib/postgresql/data postgres PostgreSQL starts successfully, and queries work as expected.\nThis time, overlay.qcow2 grows only when blocks are modified, and the growth is significantly smaller than with overlayfs.\nConclusion # Providing a PostgreSQL viewer inside the Plakar UI is an interesting idea that can significantly improve the user experience when dealing with database backups.\nWe first attempted to restore a snapshot to disk and run PostgreSQL on top of it, but this approach is not feasible for large databases: it requires transferring the entire database backup before any query can run, which is slow and storage-intensive.\nThen, we explored running PostgreSQL directly on top of snapshot data mounted with plakar mount. However, since PostgreSQL requires write access to its data directory, this approach failed.\nNext, we experimented with overlayfs to provide a writable layer on top of the read-only Kloset mount.
While this allowed PostgreSQL to start, the file-level copy-on-write behavior caused a full copy of the database, defeating the purpose of on-demand data fetching.\nFinally, we found that using qcow2 images provided a better solution. By creating a qcow2 overlay image backed by a base image containing the PostgreSQL data, we were able to run PostgreSQL with block-level copy-on-write semantics. This approach significantly reduced the amount of data copied during write operations, making it a promising path toward implementing a native PostgreSQL viewer in Plakar. There is still a lot of complex work to be done to make this solution production-ready, but the initial experiments are encouraging.\n","date":"11 January 2026","externalUrl":null,"permalink":"/posts/2026-01-11/researching-a-postgresql-viewer-for-plakar/","section":"Plakar Blog","summary":"An R\u0026D exploration of adding a PostgreSQL viewer to the Plakar UI, comparing filesystem-based approaches with block-level copy-on-write using qcow2.","title":"Researching a PostgreSQL viewer for Plakar","type":"posts"},{"content":"","date":"11 January 2026","externalUrl":null,"permalink":"/tags/ui/","section":"Tags","summary":"","title":"UI","type":"tags"},{"content":"","date":"7 January 2026","externalUrl":null,"permalink":"/tags/linux-foundation/","section":"Tags","summary":"","title":"Linux-Foundation","type":"tags"},{"content":"","date":"7 January 2026","externalUrl":null,"permalink":"/authors/nestor/","section":"Authors","summary":"","title":"Nestor","type":"authors"},{"content":"Today marks a significant step forward for our project and our community. We are thrilled to announce that Plakar has officially joined the Linux Foundation and the Cloud Native Computing Foundation (CNCF) as a member. 
This is an important milestone in our mission to establish an Open Standard for Resilience.\nWe founded Plakar with a specific DNA: a convergence of hyperscale operations and uncompromising security engineering, rooted in the OpenBSD philosophy. By joining these foundations, we commit to bringing this rigor to the global Open Source ecosystem. We believe that true resilience relies on transparency. You cannot claim to secure critical infrastructure if the format protecting your data remains a proprietary secret or a black box.\nThis membership is not just a formality; it strengthens our pledge to three core principles that drive our engineering every day.\nRadical Openness: The format storing your data must be open, documented, and built to outlive the tools that created it. We are moving away from proprietary silos to a portable, content-addressed primitive. Zero-Trust by Design: In an adversarial environment, security cannot be an afterthought. Our core engine, Kloset, and our archive format, PTAR, are designed to ensure you never have to trust the infrastructure with your encryption keys. Ecosystem Integration: We are joining the table to collaborate with the industry leaders. Our goal is to ensure Plakar integrates seamlessly with the cloud-native stack, bridging the gap between modern workloads and traditional infrastructure without friction. We are building the critical last line of defense, and we are doing it in the open. This membership is a promise that Plakar will evolve with the strict governance and transparency required by the modern cloud-native stack. We invite every developer, engineer, and operator to be part of this journey.\nJoin the movement on GitHub and Discord\nWith love. 
❤️\nThe Plakar team\n","date":"7 January 2026","externalUrl":null,"permalink":"/posts/2026-01-07/plakar-joins-the-linux-foundation-and-cloud-native-computing-foundation/","section":"Plakar Blog","summary":"We are proud to announce that Plakar has officially joined the Linux Foundation and the CNCF as a member, marking a pivotal step in establishing an Open Standard for Resilience.","title":"Plakar joins the Linux Foundation and Cloud Native Computing Foundation","type":"posts"},{"content":"To celebrate this momentum on the final day of 2025, we are incredibly proud to announce the immediate availability of Plakar Enterprise for AWS. As the year draws to a close, the Plakar team looks back with immense pride on twelve months of exceptional velocity. In less than a year, we have transitioned from an ambitious vision to a proven technical reality.\nWhy we think legacy backup is broken # We founded Plakar based on a critical observation: traditional backup architectures have hit a structural dead end. With data volumes exploding, often doubling every three to four years, and end-to-end encryption becoming non-negotiable to counter ransomware and massive data leaks, legacy tools are failing.\nThey cannot maintain storage efficiency once data is encrypted, forcing enterprises into an impossible choice between security and cost. Furthermore, the fragmentation of data across on-premise, cloud, and SaaS environments has created a dangerous blind spot in governance, leaving IT leaders unsure of what is truly protected.\nHow we spent 2025: Solving the efficiency paradox # We dedicated 2025 to solving this impossible equation. 
Our engineering teams developed Kloset, our open-source storage engine, and PTAR, our universal archive format.\nThese technologies finally reconcile high-density storage efficiency with zero-knowledge encryption, ensuring data is never exposed in plain text to the infrastructure, while guaranteeing instant access to data for advanced usage and efficient restore. This technical rigor has united a vibrant community of 600 engineers on Discord around several open source releases, whose feedback has been vital in validating our assumptions and hardening our product.\nOur vision for a new Open Standard # Our mission remains unchanged: to establish the new Open Standard for Data Resilience. We are not just building another backup tool; we are decoupling data protection from the underlying storage infrastructure.\nOur promise is simple, \u0026ldquo;Backup anything. Store anywhere. Restore everywhere,\u0026rdquo; while guaranteeing that resilience operations can be delegated to third parties, whether internal or external to your company, without ever surrendering access to your data.\nA major milestone: Bringing Plakar to AWS # The release of Plakar Enterprise for AWS marks a major milestone toward this vision. Designed as a hardened virtual appliance, this version acts as a layer of intelligence and unified governance.\nIt enables enterprises to secure their native AWS environments while maintaining total visibility and agnostic control over their resilience posture, effectively bridging the governance gap that threatens large infrastructures.\nJoin the beta and help us shape the future # Starting today, the Enterprise version is available on the AWS Marketplace in closed beta.\nWe invite our community, our first clients, and our strategic Design Partners to contact us to gain access before a public version is released during January. 
We are eager to hear your feedback to continue refining our standard.\nThis launch is just the beginning; numerous new features are expected in the coming weeks for both the Enterprise and Community versions, accelerating our transition from a single tool into a comprehensive Resilience-as-a-Service ecosystem. We are also actively working on expanding our cloud footprint: versions for other providers will follow rapidly, with OVHcloud and GCP next in line.\nFrom Nantes, Amsterdam, Prague, Bordeaux, Pordenone, Saint-Malo, Vieux - Europe - December 31, 2025 with love. ❤️\n","date":"31 December 2025","externalUrl":null,"permalink":"/posts/2025-12-31/announcing-plakar-enterprise-for-aws-preview/","section":"Plakar Blog","summary":"We are proud to announce the immediate availability of Plakar Enterprise for AWS, bringing Cloud-Native Resilience and Zero-Trust security to your VPC.","title":"Announcing Plakar Enterprise for AWS (Preview)","type":"posts"},{"content":"","date":"31 December 2025","externalUrl":null,"permalink":"/tags/cloud/","section":"Tags","summary":"","title":"Cloud","type":"tags"},{"content":"","date":"31 December 2025","externalUrl":null,"permalink":"/tags/release/","section":"Tags","summary":"","title":"Release","type":"tags"},{"content":"","date":"31 December 2025","externalUrl":null,"permalink":"/tags/resilience/","section":"Tags","summary":"","title":"Resilience","type":"tags"},{"content":"","date":"30 November 2025","externalUrl":null,"permalink":"/tags/backup/","section":"Tags","summary":"","title":"Backup","type":"tags"},{"content":"","date":"30 November 2025","externalUrl":null,"permalink":"/tags/build/","section":"Tags","summary":"","title":"Build","type":"tags"},{"content":"","date":"30 November 2025","externalUrl":null,"permalink":"/tags/hooks/","section":"Tags","summary":"","title":"Hooks","type":"tags"},{"content":"Our next major stable version, v1.1.0, is planned for December and will bring many significant improvements. 
However, we recently fixed an issue important enough to justify a minor release ahead of schedule.\nThe bug itself is unlikely to occur (it requires a specific and unfortunate timing), but since at least one user encountered it, we decided to reduce the risk of others hitting it by providing this bugfix release early.\nWe strongly encourage everyone to update to the latest version and run the new command from the machine you use to perform backups:\nplakar repair If anything needs to be corrected, the tool will let you know and prompt you to apply the fix with:\nplakar repair -apply Even if everything is already correct, as it will be for the vast majority of users, running the command is a simple, safe step to ensure your repository is fully consistent while v1.1.0 is being finalized.\nGet it now ! # Instructions on how to download and install are available in the download section!\nAvoid possible state desynchronization # Discord user eau reported a situation where snapshots looked perfectly fine on the machine that created them, but not on another machine. Since plakar check confirmed the data was valid on at least one machine, we quickly identified this as a state-synchronization mismatch, not a data issue.\nWe traced the cause to a small logic bug that could, in rare circumstances, let a machine record a snapshot in its local state before the corresponding remote state became visible to others. This is now fully resolved. We introduced a two-stage commit that guarantees the remote state is updated before the snapshot appears locally, eliminating the possibility of desynchronization.\nHow likely is it that you hit it? # The bug could only occur if, during a backup, every single write to the store succeeded except the very last one, an extremely unlucky sequence. Still, since it did happen once, we recommend updating and running:\nplakar repair
In nearly all cases, it will simply report that nothing needs to be done, but it’s a quick and safe check.\nEven though it was not needed in this case, it\u0026rsquo;s interesting to note that Plakar’s storage format also provides resilience: as long as the data is present in the repository, there are multiple ways to reconstruct state information if necessary.\nImproved memory usage for integrations # We also identified and fixed two memory-related issues in go-kloset-sdk, the library used by all non-builtin integrations (such as SFTP, S3, and others).\n1. Storage API memory leak A leak in the storage API caused unnecessary memory growth whenever a command read data from a store. This affected all third-party integrations and could lead to excessive RAM usage during operations like listing, checking, or restoring snapshots.\n2. Large buffer retention during restores A second issue caused large memory buffers to be kept alive when checking or restoring snapshots, again only when using non-builtin integrations. This meant that large snapshots could trigger unexpectedly high memory usage on S3 or SFTP backends.\nBoth problems were easy to miss since they only impacted external integrations and required working with large snapshots to become noticeable. However, after recent user reports, we investigated and resolved both issues.\nTo benefit from the fix, update plakar to v1.0.6 then run plakar pkg rm and plakar pkg add for each of the integrations you use:\n$ plakar pkg rm s3 $ plakar pkg add s3 This will fetch the new version of the integration, linked against the corrected go-kloset-sdk.\nIf you’ve been experiencing high memory usage on S3 or SFTP, this update should make you very happy 🙂\nWhat’s next? # Over the past two months, most of our attention has been dedicated to our upcoming plakar enterprise product, a substantial milestone that deserves its own dedicated post. 
We’ll share more about it very soon.\nAt the same time, we’ve been making significant progress on plakar community, but some of the largest improvements still require polishing and thorough testing before they’re ready for release. We weren’t comfortable leaving the state-synchronization bug open for several more weeks, and given that we had also made substantial improvements to memory usage, we decided to ship v1.0.6 as a minor release. This provides an immediate fix for the state issue as well as the high-memory usage problems some users experienced with SFTP and S3.\nLooking ahead, v1.1.0 is scheduled for release in December. It will include a major revamp of the caching layer along with several additional optimizations and reworks. Until then, we strongly recommend updating to v1.0.6 to benefit from these reliability and performance improvements.\nFull Changelog # 👉 v1.0.5\u0026hellip;v1.0.6\nAs always, feedback is welcome: try it out, break things, and tell us what happens!\n","date":"30 November 2025","externalUrl":null,"permalink":"/posts/2025-11-30/release-v1.0.6-bugfix-and-memory-usage-improvement/","section":"Plakar Blog","summary":"v1.0.6 brings a few bugfixes and huge memory usage improvements.","title":"Release v1.0.6 — Bugfix and memory usage improvement","type":"posts"},{"content":"Hot on the heels of v1.0.4, we’re excited to ship Plakar v1.0.5 — a release packed with build refinements, pipeline tuning, hook support for backups, and smaller but meaningful quality-of-life updates across the board.\nThis version sets the stage for smoother integrations, better developer ergonomics, and more flexible automation.\nGet it now ! 
# Instructions on how to download and install are available in the Download section!
Build & Packaging Improvements # We’ve improved the build process to make distribution cleaner and more robust:
✅ Fixed Homebrew packaging to ensure a smooth experience on macOS (#1684)
🪟 Added Windows builds for broader platform support (#1685)
📦 Multiple dependency bumps, including: golang.org/x/tools, golang.org/x/mod, google.golang.org/grpc, github.com/spf13/viper, github.com/charmbracelet/bubbletea, github.com/go-playground/validator/v10
These changes ensure a more consistent, up-to-date development environment across all platforms.
UI & Documentation Updates # New social links and documentation references have been added (#1706). Plakar UI has been synced to the latest main@4a02561 revision (#1710), with simplified asset serving (#1718). CI was fixed to properly update the UI as part of the build (#1709). Manual pages were enhanced to better describe the import command (#1730). Together, these improvements polish the interface and documentation, making Plakar more accessible and discoverable.
Pipeline & Concurrency Tuning # Since turning backup into a pipeline, we’ve adjusted concurrency levels to better align with the new architecture (#1713).
This change improves stability and resource usage during heavy operations, paving the way for further optimizations in future versions.
Backup Hooks & Sync Enhancements # A key new feature in v1.0.5 is hook support for backup commands:
Added pre-hook and post-hook CLI flags to plakar backup (#1727)
Hooks now work seamlessly on Windows too (#1741)
Added fail hooks, allowing users to trigger custom actions when backups fail (#1743)
Introduced support for passphrase_cmd during sync operations (#1744)
These additions unlock more powerful automation and integration scenarios, letting you plug Plakar more deeply into existing workflows.
Maintenance & Internal Refinements # Other notable
changes include:
Improved type safety in DecodeRPC (#1721)
Clearer messaging around grace periods (#1717)
Better login requirement clarifications (#1715)
Enhanced handling for missing locations (#1716)
Removed unused code paths and simplified plugin arguments (#1724, #1726, #1729)
Added a cache-mem-size parameter for finer cache control (#1738)
Miscellaneous bug fixes, including proper error handling for missing stores (#1725) and filter overrides (#1737)
These refinements make the codebase leaner, more predictable, and easier to maintain.
New Contributors # A warm welcome to @pata27, who made their first contribution in #1725 🎉
We are awarding him this avatar (S stands for Superpata, just so you know):
Full Changelog # 👉 v1.0.4…v1.0.5
This release may not be as headline-grabbing as v1.0.4, but it’s a critical stepping stone — tightening the bolts, refining workflows, and enabling more flexibility for power users.
As always, feedback is welcome: try it out, break things, and tell us what happens!
","date":"15 October 2025","externalUrl":null,"permalink":"/posts/2025-10-15/release-v1.0.5-refinements-hooks-build-improvements/","section":"Plakar Blog","summary":"v1.0.5 is here! This release focuses on build improvements, UI updates, smarter pipelines, new hook capabilities, and various maintenance enhancements.","title":"Release v1.0.5 — Refinements, Hooks & Build Improvements","type":"posts"},{"content":"Here is a list of common false assumptions about backup that I’ve heard repeatedly over the past year from discussions with engineers, CTOs, and sysadmins across various industries.
These misconceptions often sound reasonable, but they create a false sense of safety until reality strikes.
If a backup finished successfully, it can be restored
A backup that completes without errors doesn’t guarantee it can be restored.
Most failures happen during recovery, due to corruption, misconfiguration, or missing pieces.
(GitLab incident) In 2017, a maintenance mistake wiped the primary database and multiple backups failed validation, forcing a restore that lost about six hours of production data.
RAID, replication, or snapshots are backups
They are not. These mechanisms protect availability, not recoverability. They replicate corruption, deletions, and ransomware with impressive speed.
Replication synchronizes data, including accidental deletions or corruptions. Backups preserve history and offer rollback.
(Meta) Meta documented “silent data corruptions” from faulty CPUs that replication dutifully propagated across systems, proving redundancy isn’t the same as recoverability.
Cloud providers back up my data
They don’t. At best, cloud providers offer durability and redundancy, not backups. You are responsible for protecting your own data.
They all use a shared responsibility model that clearly states that backups are your job, and implicitly (or explicitly) states that you should back up your data outside their scope.
(Google Cloud UniSuper incident) In 2024, a Google Cloud provisioning misconfiguration deleted UniSuper’s entire GCVE environment across regions—service was down for two weeks until backups were rebuilt.
The database files are enough to recover the database
Not without transaction logs or consistency coordination.
Copying raw files doesn’t guarantee usable data.
(Microsoft TechCommunity: top 5 reasons why backup goes wrong)
Microsoft’s guidance highlights real-world restores that fail because required logs/consistency points weren’t captured—even when raw database files existed.
Our backups are safe from ransomware
If they are accessible from the network, they are a primary target. Ransomware hits backups first. Isolation and immutability are critical.
To prevent data leakage, backups should be encrypted, but you can still lose access to your data if the ransomware also encrypts or deletes your backups.
(PerCSoft / DDS Safe) A ransomware attack on the dental-backup provider encrypted the cloud backups of hundreds of practices, leaving many without a usable recovery point.
A well-configured S3 bucket doesn’t require backup
Even a perfectly configured S3 bucket - with Versioning, Object Lock (Compliance mode), and MFA Delete - is not a backup.
AWS itself advises creating immutable copies in an isolated secondary account to protect against breaches, misconfigurations, compromised credentials, or accidental deletions. The official architecture (AWS Storage Blog, 2023) explicitly shows that replication and Object Lock alone do not protect you from logical corruption or account compromise: you must replicate to a separate, restricted account to keep an independent, immutable copy.
In practice, replication can also amplify failures or ransomware attacks if not isolated: when the source data is encrypted or deleted, replication faithfully propagates the damage to the destination. This is why AWS recommends automated suspension of replication when suspicious PUT or DELETE activity is detected, a classic anti-ransomware safeguard.
S3 is designed for durability, not recoverability. A “well-configured bucket” ensures data isn’t lost due to hardware failure, but it won’t help you recover from a logic error, a bad IAM policy, or an API key compromise.
True protection requires an independent, immutable backup, ideally in another account or region, with Object Lock compliance and strict key isolation.
(AWS Blog: Modern Data Protection Architecture on Amazon S3, Part 1)
Encryption in transit and at rest is not end-to-end security for backup
Real E2E means client-side encryption with customer-held keys. If the backup server or its KMS can decrypt, an attacker who compromises it can too.
CVE-2023-27532 shows the risk: an unauthenticated actor could query the Veeam Backup Service and pull encrypted credentials from the config database, then pivot to hosts and repositories. It was exploited in the wild.
(CISA KEV: CVE-2023-27532) • (BlackBerry on Cuba ransomware) • (Group-IB on EstateRansomware)
Incremental backups are always safer and faster
Not always. Long incremental chains rely on an index/catalog; if it’s corrupted or unavailable, the chain becomes unusable—one bad link can break the whole sequence.
Commvault example: when an Index V2 becomes corrupted, it’s marked Critical and, on the next browse/restore, Commvault rebuilds only from the latest cycle, making intermediate incremental points unavailable (common error: “The index cannot be accessed”). This can happen silently if the index is corrupted but still readable, leading to unnoticed data loss until a restore is needed.
(Commvault docs – Troubleshooting Index V2) - (Commvault Community – “The index cannot be accessed”)
A daily backup is enough
For most modern systems, losing nearly 24 hours of data is not acceptable. Recovery Point Objectives must match business needs.
Why: in many businesses, one day of irreversible data loss ≈ one full day of revenue (orders, invoices, subscriptions, transactions that can’t be reconstructed), plus re-work and SLA penalties.
For mid-to-large companies, that can quickly reach millions of euros.
Rule-of-thumb:
Cost of a 24h RPO ≈ (Daily net revenue) + (Re-entry/reconciliation labor) + (SLA/chargebacks) + (churn/opportunity loss).
(GitLab incident) GitLab’s postmortem shows how relying on a single daily point risks losing an entire day’s business activity in one incident.
Backup storage will always be available
Storage fills up, disks fail, and credentials expire. Many backup systems stop quietly when that happens.
Why:
Capacity: backup jobs commonly fail with “There is not enough space on the disk,” and operations like synthetic fulls/merges require extra temporary space (so “TBs free” can still be insufficient).
Index/metadata growth: index restores can balloon and fill disks, blocking browse/restore and further jobs (Commvault Index Restore filling the index disk; guidance on index cache pressure).
Expired credentials/tokens: cloud backups fail when AWS tokens or Azure SAS credentials expire (e.g., S3 ExpiredToken, SAS token expiry breaks backup-to-URL).
Backup is an IT problem
It’s not. It’s a business continuity and risk management concern. Recovery priorities should be defined at the business level.
(Ransomware attack shutters 157-year-old Lincoln College)
Help us debunk these myths by sharing your own experiences and insights in this Reddit thread: Reddit thread
Most backup incidents go underreported: for obvious reasons, vendors and affected organizations rarely disclose full details.
All the more reason to master the fundamentals (RPO/RTO, isolation, immutability, key separation) and to regularly test restores: don’t wait for public post-mortems to learn.
We are building Plakar as an Open Source project to help everyone protect their data effectively and cover all these bases.
","date":"17 September 2025","externalUrl":null,"permalink":"/posts/2025-09-17/falsehoods-engineers-believe-about-backup/","section":"Plakar Blog","summary":"Falsehoods engineers believe about backup","title":"Falsehoods engineers believe about backup","type":"posts"},{"content":"","date":"17 September 2025","externalUrl":null,"permalink":"/authors/jmangeard/","section":"Authors","summary":"","title":"Jmangeard","type":"authors"},{"content":"","date":"16 September 2025","externalUrl":null,"permalink":"/tags/archive/","section":"Tags","summary":"","title":"Archive","type":"tags"},{"content":"","date":"16 September 2025","externalUrl":null,"permalink":"/tags/backups/","section":"Tags","summary":"","title":"Backups","type":"tags"},{"content":"","date":"16 September 2025","externalUrl":null,"permalink":"/tags/integrations/","section":"Tags","summary":"","title":"Integrations","type":"tags"},{"content":"It’s been a while — but we haven’t been idle.
Today we’re proud to announce Plakar v1.0.4, a stable release that reflects months of refinement, community input, and nearly 2,000 commits of engineering effort.
This release is more than just an update — it’s a milestone.
It introduces major performance boosts, redefines how integrations are delivered, and lays down the foundation for a new class of features that will shape Plakar’s future.
Pre-packaged binaries: install without friction # Building from source has always been a barrier for many users — whether due to missing toolchains, mismatched dependencies, or simply the time it takes.
Starting today, that barrier is gone.
We now ship pre-packaged binaries for popular systems and distributions:
.deb for Debian/Ubuntu
.rpm for Fedora, CentOS, RHEL
.apk for Alpine Linux
Plus static tarballs for everything else
At launch, you can grab them from our GitHub releases page.
Later this week, we’ll go one step further: official package repositories. This means you’ll be able to install or update Plakar with a single apt, yum, or apk command — keeping your deployments simpler, cleaner, and always up-to-date.
This is a big step toward making Plakar accessible to everyone, everywhere.
Initial Windows support # We are bringing initial support for Windows.
The only limitation that we know of is that the agent and scheduler aren’t supported: unlike Unix-like systems, these require implementing a Windows Service for background tasks, which is something we didn’t have time to complete.
So you can run plakar on Windows, perform backups, checks, and restores, and even run the UI, but you can’t run two operations in parallel at this time.
Regardless, it is a huge milestone considering that our previous version didn’t even build on Windows.
Integrations as plugins: leaner, faster, more flexible # One of the biggest changes in this release is the new plugin system.
Integrations — for storage, sources, and destinations — are no longer tied directly into the core of Plakar.
Instead, they are delivered as independent plugins that you can install on demand:
plakar pkg add <integration>
e.g.:
plakar pkg add s3
plakar pkg add sftp
...
This shift brings three major benefits:
Lightweight core — Plakar ships leaner, with fewer dependencies baked in.
Independent releases — plugins can evolve and be updated faster, without waiting for a full Plakar release.
Extensibility — building and distributing new integrations becomes easier, encouraging contributions from the community.
With v1.0.4, we’re officially rolling out this plugin system — and many integrations (S3, SFTP, GCP, IMAP, FTP, …) are already available.
Smarter agent: auto-spawn & teardown # Concurrency in Plakar requires a local cache. Until now, this meant manually starting the plakar agent before running commands. Forgetting to do so was one of the most common pitfalls — and a frustrating one.
That era is over.
From now on, the agent manages itself:
It auto-spawns when needed.
It auto-tears down after a short idle period.
It never hangs around in the background unless required.
This might sound like a small change, but in practice it completely removes friction. It’s the first step toward a future where you don’t have to think about concurrency or agents at all — Plakar will simply handle it for you.
Cache improvements: smarter, not noisier # Caching is at the heart of Plakar’s speed. But caching too aggressively could sometimes mean wasted resources in large datasets.
We’ve tuned and refined our strategy.
The new cache layer is smarter, less intrusive, and more accurate.
For example, in our test Korpus with over 1 million resources, the new approach reduced unnecessary lookups while improving accuracy and keeping memory usage under control.
The result: faster operations that feel lighter and more predictable.
There are still corner cases where we cache too aggressively and that we need to work on, but v1.0.4 already halves cache storage space for most setups and performs far fewer disk hits for cache lookups and writes, putting less pressure on disk I/O.
Performance boosts: speed everywhere # We’ve poured countless hours into profiling and optimizing every corner of Plakar.
Some of the highlights:
Faster indexing of snapshots
More efficient filesystem traversal
More efficient file content access
Optimized deduplication pipelines
Lower memory footprint across the board
These changes combine to make commands like plakar backup, plakar check, and plakar restore noticeably faster — especially in large-scale environments.
Details will follow in a dedicated technical post, but the difference is already tangible: Plakar simply feels faster. Some commands get a 2x boost, others up to a 10x boost, due to much improved data access patterns.
Policy support: control how data lives # With v1.0.4, Plakar introduces policy definitions.
This means you can now define rules that govern how data is stored, managed, and eventually pruned.
For now, policies are simple — but they open the door to advanced features such as:
Tiered storage (e.g.
SSD for hot data, object storage for cold data)
Automatic pruning based on age, size, or frequency
Smarter scheduling of backups and syncs
Think of policies as the scaffolding for enterprise-grade data lifecycle management inside Plakar.
Currently, policies allow you to do things like keep 2 backups per month over the last three months + 5 backups per week over the last four weeks + 3 per day over the last 2 days:
$ plakar prune -days 2 -per-day 3 -weeks 4 -per-week 5 -months 3 -per-month 2
But it also supports filtering on all our snapshot-locating options, like filtering on tags:
$ plakar prune -tags finance -per-day 5
UI improvements: details that matter # Not everything in v1.0.4 is flashy — some of the most impactful changes are in the everyday experience.
For this release, we went beyond functional tweaks: we partnered with a design studio to rethink Plakar’s look and feel. The result is a cleaner, more consistent, and more approachable interface that makes everyday usage smoother.
Some highlights include:
Refined layouts that present information in a way that feels more natural and easier to scan
Consistent typography and spacing, improving readability for long-running commands
Clearer visual hierarchy in progress reporting, so you can immediately see what matters
Better error messages, rewritten to be actionable and friendly rather than cryptic
The collaboration brought in fresh eyes from outside the engineering team, ensuring the interface wasn’t just technically correct but also pleasant to use.
These changes may feel small individually, but together they deliver a more polished user experience.
Plakar now feels less like a raw developer tool and more like a thoughtfully designed product — without losing the power and flexibility that make it unique.
You can have a peek at it on our demo website, but here are a few screenshots for you.
Call to action # And this is just the beginning.
We’re already working on the next wave of features: richer integrations, deeper policy engines, more UI improvements, and better cross-platform support.
But we need you:
⭐ Star us on GitHub
💬 Join the chat on Discord
🚀 Try the new release, push it to its limits, and tell us what you think of it!
This is your Plakar as much as it is ours. Let’s keep building it together :-)
","date":"16 September 2025","externalUrl":null,"permalink":"/posts/2025-09-16/release-v1.0.4-a-new-milestone-for-plakar/","section":"Plakar Blog","summary":"Plakar v1.0.4 introduces pre-packaged binaries, a new plugin system for integrations, smarter caching, policy-based lifecycle management, UI refinements, and major performance boosts — marking a milestone release for the platform.","title":"Release v1.0.4 — A new milestone for Plakar","type":"posts"},{"content":"You didn’t ask. We still listened. And now — it’s here 🎉
Notion backup integration is live! # Notion powers countless personal and team workflows — but backing it up?
That’s another story.
Let’s be honest: Backing up Notion is painful — clunky exports, API weirdness, and lots of manual overhead.
As a result, most people don’t even have backups, and for a tool so central to everyday work, that’s… a bit scary.
What if your Notion data gets accidentally wiped out?
brrrrrr, that changes today.
Notion Integration GitHub
With our new Notion integration, you can now:
Import your Notion pages as versioned snapshots
Maintain a local copy of all your Notion content
All using the same Plakar tooling you already know and love 💜
Attention # This feature only works on our development branch for the time being; you can give it a try by installing our latest devel release:
$ go install github.com/PlakarKorp/plakar@v1.0.3-devel.889b4b6
Install in seconds # Since this is still a testing version, we don’t provide pre-built binaries yet, but you can easily build the plugin and install it from source, as plakar comes with its own tooling.
Typing the following command will fetch the latest version of the integration and build a plugin out of it:
$ plakar pkg build notion
/usr/bin/make -C /tmp/build-notion-v0.1.0-devel.b66af0a-644909591
ea7b3ad6: OK ✓ /manifest.yaml
ea7b3ad6: OK ✓ /notion-importer
ea7b3ad6: OK ✓ /notion-exporter
ea7b3ad6: OK ✓ /
Plugin created successfully: notion_v0.1.0-devel.b66af0a_darwin_arm64.ptar
The resulting file, /tmp/notion_v0.1.0-devel.b66af0a_darwin_arm64.ptar, is a plugin that’s exactly like the ones that will be pre-built and distributed by us, ready to be installed:
$ plakar pkg add ./notion-v0.1.0-devel.b66af0a_darwin_arm64.ptar
You can verify that it’s properly installed (see how notion appears now):
$ plakar version
plakar/v1.0.3-devel
importers: fs, ftp, notion, s3, sftp, stdin, tar, tar+gz, tgz
exporters: fs, ftp, notion, s3, sftp, stderr, stdout
klosets: fs, http, https, ptar, ptar+http, ptar+https, s3, sftp, sqlite
And… that’s all you have to do!
Setup the Notion side # Notion also provides a system of integrations to allow applications to interact with it, so before you
perform your first backup you need to create an integration at Notion:
This will provide you with a secret token that you need to keep for the plakar configuration.
Then, for each page you want plakar to have access to, you will need to go to the upper-right menu and attach your integration there:
This is tedious, but hey… either we missed something or it’s like they didn’t want data to be extracted that easily ;-)
Setup the Plakar side # Once everything is ready at Notion, you need to provide plakar with a source configuration for it to know where to fetch the data.
$ plakar source set mynotion notion:// \
    token=ntn_1234567890123456789012345678901234567890123456
Reload the agent configuration (this step will soon become optional):
$ plakar agent reload
… and run your backup!
$ plakar backup @mynotion
/Users/gilles/.cache/plakar/plugins/notion_v0.1.0_darwin_arm64
30b99763: OK ✓ /e2fdfe56-536a-4172-8974-78b14b351df7/page.json
30b99763: OK ✓ /e2fdfe56-536a-4172-8974-78b14b351df7/9ccb9414-066f-4743-a694-6589cce600b6/page.json
30b99763: OK ✓ /e2fdfe56-536a-4172-8974-78b14b351df7/8ea2b894-7caa-4f57-8695-803e3c09369c/page.json
[...]
You’re done.
To restore, you do the opposite by providing a destination configuration. However, Notion’s public API doesn’t let you restore directly to a workspace, so first:
create an empty page and make sure you have write access
get its ID from the URL: https://www.notion.so/1ea782d6899380dd96c2f88f20f68635
attach the notion integration to that page, as explained in the previous section
Then you can do the plakar setup as was done for backup, but now for the destination side:
$ plakar destination set mynotion notion:// \
    token=ntn_1234567890123456789012345678901234567890123456 \
    rootID=1ea782d6899380dd96c2f88f20f68635
Reload the agent config:
$ plakar agent reload
… and restore!
$ plakar restore -to @mynotion 30b99763
30b99763: OK ✓ /e2fdfe56-536a-4172-8974-78b14b351df7/page.json
30b99763: OK ✓ /e2fdfe56-536a-4172-8974-78b14b351df7/9ccb9414-066f-4743-a694-6589cce600b6/page.json
30b99763: OK ✓ /e2fdfe56-536a-4172-8974-78b14b351df7/8ea2b894-7caa-4f57-8695-803e3c09369c/page.json
[...]
There. you. go…
Note that you can also restore to a local directory or an alternate target; the restored data will maintain the original structure.
$ plakar restore -to /tmp/notion-backup 30b99763
30b99763: OK ✓ /e2fdfe56-536a-4172-8974-78b14b351df7/page.json
30b99763: OK ✓ /e2fdfe56-536a-4172-8974-78b14b351df7/9ccb9414-066f-4743-a694-6589cce600b6/page.json
30b99763: OK ✓ /e2fdfe56-536a-4172-8974-78b14b351df7/8ea2b894-7caa-4f57-8695-803e3c09369c/page.json
[...]
Now let’s run the UI:
$ plakar ui
Caveats # Because the API was not designed to make it easy to extract or inject data, implementing backups for Notion is a circle of hell, particularly if you want browsable snapshots as we do.
Here’s a depiction, and we’re somewhere near the middle now:
The integration is not the fastest, to say the least, but we have ways to improve that: we’ve mainly focused on making it work, without any optimization whatsoever.
Backup works fine once the pages have been configured, including the media present in them. However, at the current time restore can’t push the media back into Notion: they are part of the backups but can’t be pushed back.
The reason is that the Notion API wants images to be hosted somewhere else, so we can provide them a link to pull from. Obviously, we can’t push the content of your backups to some hosting platform, so we still need to think of a creative way to tackle this… at least the media are present in your backups, so there’s that.
Your usual call to action # We’re shipping early to get your feedback — don’t hesitate to file issues or contribute patches.
⭐ Star us on GitHub
💬 Join the chat on Discord
🚀 Stay tuned for another integration dropping tomorrow
","date":"17 July 2025","externalUrl":null,"permalink":"/posts/2025-07-17/back-up-notion-yes-you-can./","section":"Plakar Blog","summary":"With our new Notion integration, plakar can now snapshot and restore workspaces directly — docs, databases, and more. No hacks. Just data.","title":"Back up Notion? Yes, you can.","type":"posts"},{"content":"Can’t really say it’s been that long… but we’re back with exciting new features 😄
What’s an integration?
# Integrations in plakar are designed to make backups easier by extending its ability to handle new resources.
They can:
Provide storage connectors (e.g. host a Kloset store on S3, FS, etc.)
Provide source connectors (e.g. import from FTP, IMAP, local FS…)
Provide destination connectors (e.g. restore to SFTP, IMAP, etc.)
You can mix and match them however you like. Back up your local filesystem to S3, restore it to a remote SFTP — no sweat.
Bonus: We plan to expand integrations even further — think snapshot data analyzers (GDPR tagging, email detection…), custom data viewers (SQL query explorers!), and more.
Integrations are intentionally lightweight — some of ours were built in under an hour with fewer than 100 lines of Go.
We’re committed to keeping the barrier to entry low, so anyone can create their own with ease — whether you’re contributing open source integrations to help grow the plakar ecosystem, or building commercial ones to monetize your work.
Attention # This feature only works on our development branch for the time being; you can give it a try by installing our latest devel release:
$ go install github.com/PlakarKorp/plakar@v1.0.3-devel.455ca52
Introducing go-kloset-sdk # We’ve made writing integrations simple — but handling the underlying plumbing for plugins (like gRPC over socketpairs, IPC management, and process orchestration)… not so much. It’s messy, error-prone, and frankly, not something most developers want to deal with.
That’s where go-kloset-sdk steps in.
This SDK abstracts all the hard parts. It lets you:
Write integrations exactly like the builtins
Package them as standalone plugins with no boilerplate
Convert existing builtins into plugins effortlessly
Go Kloset SDK GitHub
It’s still evolving (interface changes may happen), but you can use it right now to build powerful plugins without the headache.
Writing an Integration: Quick Guide # 1.
Implement the Connector Interface # Example: a simple FTP source connector:
func NewFTPImporter(...) (importer.Importer, error)
func (p *FTPImporter) Close() error
// produce the full list of resources to back up
func (p *FTPImporter) Scan() (<-chan *importer.ScanResult, error)
// provide a ReadCloser to a specific resource
func (p *FTPImporter) NewReader(string) (io.ReadCloser, error)
That’s really it. Connect to the resource, expose the data.
2. Register the Implementation with the SDK # The second step is to write a main.go that registers the implementation with the SDK:
package main

import (
	"fmt"
	"os"

	"github.com/PlakarKorp/go-kloset-sdk/sdk"
	"github.com/PlakarKorp/integration-ftp/importer"
)

func main() {
	if len(os.Args) != 1 {
		fmt.Printf("Usage: %s\n", os.Args[0])
		os.Exit(1)
	}
	err := sdk.RunImporter(importer.NewFTPImporter)
	if err != nil {
		panic(err)
	}
}
3. Describe it in a Manifest #
name: ftp
description: ftp importer
version: 0.1.0
connectors:
  - type: importer
    executable: ftpImporter
    homepage: https://github.com/PlakarKorp/integration-ftp
    license: ISC
    protocols: [ftp]
4.
Build the Package #
$ go build -o ftpImporter ./plugin/importer
$ plakar pkg create manifest.yaml
You’ll get a ftp-v0.1.0.ptar you can install:
$ plakar pkg add ftp-v0.1.0.ptar
And voilà — ftp:// becomes a fully supported import source in your setup.
Say Hello to the IMAP Integration # To celebrate the SDK release, we’re also launching a new IMAP integration!
IMAP Integration GitHub
It’s still early-stage and doesn’t yet support custom AUTH providers, but it already lets you:
Backup your email: #
$ plakar source add IMAPsrc imap://imap.mydomain.com:143 \
    username=myuser password=mypassword tls=starttls
$ plakar backup @IMAPsrc
Restore your email: #
$ plakar destination add IMAPdst imap://imap.alsomydomain.com:143 \
    username=alsomyuser password=alsomypassword tls=starttls
$ plakar restore -to @IMAPdst <snapid>
Full instructions are available in the integration’s README — we’d love your feedback!
What’s next? # We’re not done yet — this is just the start.
Expect two more integrations this week, and more in the coming weeks. We already know what’s dropping tomorrow and Friday… but we’re keeping it a surprise for now 😏
⭐ Star us on GitHub, join our community on Discord, and be part of shaping the future of plakar.
See you tomorrow!
— The Plakar Team
","date":"15 July 2025","externalUrl":null,"permalink":"/posts/2025-07-15/go-kloset-sdk-is-live/","section":"Plakar Blog","summary":"want to craft a ptar archive but you don’t need a full-fledged backup solution?
here comes kapsul, our ptar-specific tool, providing all you need from building to restoring and inspecting.","title":"go-kloset-sdk is live!","type":"posts"},{"content":"","date":"15 July 2025","externalUrl":null,"permalink":"/tags/kapsul/","section":"Tags","summary":"","title":"Kapsul","type":"tags"},{"content":"","date":"15 July 2025","externalUrl":null,"permalink":"/tags/ptar/","section":"Tags","summary":"","title":"Ptar","type":"tags"},{"content":"","date":"11 July 2025","externalUrl":null,"permalink":"/tags/chunking/","section":"Tags","summary":"","title":"Chunking","type":"tags"},{"content":" TL;DR: # Modern data systems suffer from redundancy—wasting time, compute, bandwidth, and storage on duplicate content. Traditional methods of compression don’t help enough, especially when you need cross-file, shift-resilient deduplication in large, encrypted datasets.
That’s why we built and released go-cdc-chunkers: an open-source, ISC-licensed, high-performance Go package for Content-Defined Chunking (CDC), optimized for deduplication and resilience against data shifts.
Unlike traditional compression, CDC enables fine-grained, shift-resilient deduplication, ideal for a wide range of uses including backup, synchronization, storage, and distributed systems.
The problem of duplication # Every time your system moves, stores, or processes duplicated data, it’s doing work it doesn’t need to. That means longer sync times, higher cloud egress fees, bloated containers, over-provisioned caches, and users waiting for things that should’ve been instant. Multiply that by thousands of files, logs, messages, or binary blobs—and the inefficiency compounds rapidly.
The more data you touch, the more painful and expensive that duplication becomes.\nIn our business, where we need to process large amounts of data, transfer it and store it for extended periods of time, duplication is a nightmare: it increases processing time, compute resource usage, transfer time and cost, and pressure on storage and space needed. Duplication wastes time and money at every single step.\nThe solution? Deduplication.\nDeduplication isn’t just for backups. It’s for anything that handles recurring or repetitive data: real-time collaboration tools, object storage systems, build artifact pipelines, CI/CD caches, logging infrastructures, messaging queues, document editors, and package registries. If your users upload revisions, move large files across services, or repeatedly generate similar outputs, you’re likely storing and reprocessing the same data again and again—sometimes byte-for-byte.\nBy deduplicating at the right layer—whether file-level, block-level, or chunk-level—you avoid wasting resources on what\u0026rsquo;s already known. You free up CPU cycles for meaningful computation, reduce latency across your stack, shrink your operational footprint, and make your systems leaner and faster. And if you\u0026rsquo;re paying per gigabyte, per operation, or per millisecond?
You\u0026rsquo;re literally buying back time and money.\nHere comes the go-cdc-chunkers package # To help developers build smarter, leaner systems that avoid redundant work, we’re releasing v1.0.0 of go-cdc-chunkers—an open-source, ISC-licensed library for high-performance Content-Defined Chunking (CDC) in Go.\nIt provides a framework to easily support new algorithms as research advances in the field, and provides implementations for several algorithms including our optimized version of FastCDC, our Keyed variant of FastCDC (discussed in this post), an implementation of the JumpCondition optimization and even the more recent UltraCDC.\nThis package is designed to make it easy to slice data into variable-sized, content-aware chunks that are resilient to shifts and edits—perfect for deduplication, delta encoding, change tracking, and more.\nWhether you\u0026rsquo;re building synchronization tools, blob stores, data pipelines, or just want to avoid wasting time and compute on repeated data, go-cdc-chunkers gives you the primitives you need to chunk content efficiently and predictably.\nAlgorithm Nanoseconds per operation Throughput Restic_Rabin 1932542209 ns/op 555.61 MB/s Askeladdk_FastCDC 579593250 ns/op 1852.58 MB/s Jotfs_FastCDC 448508056 ns/op 2394.03 MB/s Tigerwill90_FastCDC 377360430 ns/op 2845.40 MB/s Mhofmann_FastCDC 572578979 ns/op 1875.27 MB/s PlakarKorp_FastCDC 117534472 ns/op 9135.55 MB/s PlakarKorp_KFastCDC 115304560 ns/op 9312.22 MB/s PlakarKorp_UltraCDC 79441967 ns/op 13516.05 MB/s PlakarKorp_JC 49784102 ns/op 21567.97 MB/s It’s very fast, very memory-conscious, and production-ready, with a clean API that fits into streaming and batch workflows alike. 
We\u0026rsquo;re releasing it not just as part of our internal stack, but as a practical tool for any developer who needs data to be handled smartly—only once, not over and over.\noh\u0026hellip; and it\u0026rsquo;s trivial to use:\nchunker, err := chunkers.NewChunker(\u0026#34;fastcdc\u0026#34;, rd) if err != nil { log.Fatal(err) } offset := 0 for { chunk, err := chunker.Next() if err != nil \u0026amp;\u0026amp; err != io.EOF { log.Fatal(err) } chunkLen := len(chunk) fmt.Println(offset, chunkLen) if err == io.EOF { // no more chunks to read break } offset += chunkLen } But what is deduplication? # Deduplication is the process of identifying and eliminating repeated chunks of data within a larger dataset.\nWhy bother?\nBecause avoiding redundant work—like redoing computations, re-transferring data over the network, or storing the same content multiple times—saves compute cycles, bandwidth, I/O, and storage space. It reduces resource consumption, speeds up processing, and frees up capacity to do more useful work within the same time constraints.\nIn short: do it once, reuse the result, and move faster.\nSo it\u0026rsquo;s compression basically? # Every time we mention deduplication, people confuse it with compression.\nLet\u0026rsquo;s start by refreshing what compression does before going into deduplication; that way, we can better understand how they differ.\nFrequency-based compression # A compressor processes data and tries to identify frequently occurring sequences of bytes, replacing them with shorter ones. 
To help illustrate, let’s use a simplified example.\nI have a cat a beautiful cat an annoying cat but still a beautiful cat she does not know she is a cat but she does cat things From the text above, we extract the following tokens (bit encoding left for reference):\nToken Bit Encoding (UTF-8 bytes) I 01001001 have 01101000 01100001 01110110 01100101 a 01100001 cat 01100011 01100001 01110100 beautiful 01100010 01100101 01100001 01110101 01110100 01101001 01100110 01110101 01101100 an 01100001 01101110 annoying 01100001 01101110 01101110 01101111 01111001 01101001 01101110 01100111 but 01100010 01110101 01110100 still 01110011 01110100 01101001 01101100 01101100 she 01110011 01101000 01100101 does 01100100 01101111 01100101 01110011 not 01101110 01101111 01110100 know 01101011 01101110 01101111 01110111 is 01101001 01110011 things 01110100 01101000 01101001 01101110 01100111 01110011 \\n 00001010 (space) 00100000 If we simply encode each byte using UTF-8, the line:\na beautiful cat\\n Would become 128 bits:\n01100001 00100000 01100010 01100101\ta be 01100001 01110101 01110100 01101001\tauti 01100110 01110101 01101100 00100000\tful 01100011 01100001 01110100 00001010\tcat\\n Instead of encoding individual bytes, we can treat repeated tokens (words, spaces, newlines) as units and assign each a shorter code. 
This enables compression if we use fewer bits for more frequent tokens.\nUsing a Huffman tree, we assign shorter codes to the most frequent tokens.\nToken Frequency Huffman Code (space) 20 0 \\n 5 111 a 4 1100 cat 4 1101 she 3 1010 beautiful 2 10110 does 2 10111 but 2 10000 I 1 100010 have 1 100011 an 1 100100 annoying 1 100101 still 1 100110 not 1 100111 know 1 100000 is 1 100001 things 1 1000100 With Huffman coding, the same sentence:\na beautiful cat\\n is now encoded as just 18 bits:\na (space) beautiful (space) cat \\n 1100 0 10110 0 1101 111 Compare that to the original 128 bits – that’s nearly 7× compression just from token frequency-based substitution.\nTo decompress, all we have to do is read the short code, look up in our table which token it encoded, and substitute it back.\nHuffman coding is a classic entropy-encoding method optimal for known frequencies. It\u0026rsquo;s lossless and widely used in formats like ZIP, JPEG, and others. In real compressors, tokens may include multi-byte sequences, patterns, or dictionary entries, but the general idea remains: swap repeating chunks of bits with smaller ones.\nIsn\u0026rsquo;t that deduplication then? # So\u0026hellip; if compression works so well, why not just use it to handle deduplication?\nAnd if Huffman coding isn’t ideal, surely modern compression techniques could handle large-scale deduplication more efficiently — right?\nNot quite. While compression and deduplication both aim to reduce storage size, their strategies and constraints differ significantly. Compression alone is not well-suited for deduplication at scale due to several inherent limitations — some solvable, others fundamental. 
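Before digging into those limitations, the token-level Huffman baseline from the example above can be sketched in a few lines of Go. This is a toy illustration over whitespace-separated tokens, not a real compressor; the sample text is the one from the example:

```go
package main

import (
	"container/heap"
	"fmt"
	"strings"
)

// node is a Huffman tree node: leaves carry a token, internal nodes only a frequency.
type node struct {
	token       string
	freq        int
	left, right *node
}

// pq is a min-heap of nodes ordered by frequency.
type pq []*node

func (p pq) Len() int           { return len(p) }
func (p pq) Less(i, j int) bool { return p[i].freq < p[j].freq }
func (p pq) Swap(i, j int)      { p[i], p[j] = p[j], p[i] }
func (p *pq) Push(x any)        { *p = append(*p, x.(*node)) }
func (p *pq) Pop() any {
	old := *p
	n := old[len(old)-1]
	*p = old[:len(old)-1]
	return n
}

// buildCodes derives a prefix-free bit string per token from its frequency:
// the two least frequent subtrees are merged until one tree remains.
func buildCodes(freq map[string]int) map[string]string {
	h := &pq{}
	for tok, f := range freq {
		heap.Push(h, &node{token: tok, freq: f})
	}
	for h.Len() > 1 {
		a := heap.Pop(h).(*node)
		b := heap.Pop(h).(*node)
		heap.Push(h, &node{freq: a.freq + b.freq, left: a, right: b})
	}
	codes := map[string]string{}
	var walk func(n *node, prefix string)
	walk = func(n *node, prefix string) {
		if n.left == nil { // leaf
			codes[n.token] = prefix
			return
		}
		walk(n.left, prefix+"0")
		walk(n.right, prefix+"1")
	}
	walk((*h)[0], "")
	return codes
}

func main() {
	text := "I have a cat a beautiful cat an annoying cat but still a beautiful cat"
	freq := map[string]int{}
	for _, tok := range strings.Fields(text) {
		freq[tok]++
	}
	codes := buildCodes(freq)
	encodedBits := 0
	for tok, f := range freq {
		encodedBits += f * len(codes[tok])
	}
	fmt.Printf("%d bits after Huffman coding, vs %d UTF-8 bits\n", encodedBits, len(text)*8)
}
```

The exact codes differ from the table above (Huffman trees are not unique), but the invariant is the same: more frequent tokens never get longer codes than rarer ones.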
For clarity, we’ll continue using Huffman coding as our baseline example, but the points apply broadly to more advanced algorithms as well.\nGlobal frequency analysis doesn\u0026rsquo;t scale # In our earlier example, we compressed a small input where it was feasible to scan the entire text, build a complete frequency table, and derive optimal Huffman codes. This works well for small datasets.\nBut if the input is massive — say, multiple terabytes — it\u0026rsquo;s impractical to process the entire data stream upfront just to compute token frequencies. Reading all the data before producing any output isn\u0026rsquo;t viable in real-world pipelines.\nStreaming compression vs. Global context # To address this, most compressors operate in streaming mode. They split the input into smaller chunks (blocks or windows that often range from a few KB to several MB), compute local frequencies within each, and build temporary codes or dictionaries on the fly.\nThis helps manage memory and compute, but comes at a cost:\nRedundancy across boundaries isn\u0026rsquo;t deduplicated. Compression is suboptimal because smaller chunks have less statistical context. Common sequences in different blocks are encoded differently, breaking any chance of global deduplication. Deduplication needs chunk identity, not just shorter codes # Deduplication isn\u0026rsquo;t just about encoding recurring patterns — it\u0026rsquo;s about recognizing and reusing identical data segments that may be far apart in both space and time: segments produced in different places and different files, today but also a week from now.\nCompression removes redundancy within a local window of data. Deduplication removes redundancy across large time- and space-separated segments.\nSo while compression and deduplication are conceptually aligned, they operate at different levels and under different constraints. Compression is great for making individual files smaller. 
Deduplication is about not storing the same thing twice — ever — even if it shows up a month apart in two backups.\nA few deduplication strategies # Now that we realize that compression doesn\u0026rsquo;t cut it for data deduplication, let\u0026rsquo;s see how deduplication has evolved throughout the years.\nFor this article, let\u0026rsquo;s assume that we are backing up files with data in them. The same holds true for objects in object storage or blobs in a database; we just need a \u0026ldquo;resource\u0026rdquo; holding data, and files are the simplest one to think of.\nMetadata matching # The first approach to data deduplication is to look at the metadata and decide from there if it\u0026rsquo;s even worth looking into the data itself.\nAn example is looking at the file name, size and last modification date, if available. If I have a file that I have processed in the past and recorded the metadata for, then I can decide not to process it again if the metadata has not changed since then.\npackage main import ( \u0026#34;fmt\u0026#34; \u0026#34;os\u0026#34; ) type FileMeta struct { Name string Size int64 ModTime int64 } // seenFiles mimics previously seen file metadata var seenFiles = map[string]FileMeta{ \u0026#34;file1.dat\u0026#34;: { Name: \u0026#34;file1.dat\u0026#34;, Size: 1 \u0026lt;\u0026lt; 30, ModTime: 1620000000 }, } func isDuplicate(meta FileMeta) bool { for _, seen := range seenFiles { if meta.Size == seen.Size \u0026amp;\u0026amp; meta.ModTime == seen.ModTime { return true } } return false } func main() { // simulate a renamed copy with same content file, err := os.Stat(\u0026#34;file_copy.dat\u0026#34;) // must exist on disk if err != nil { fmt.Println(\u0026#34;Stat error:\u0026#34;, err) return } meta := FileMeta{ Name: file.Name(), Size: file.Size(), ModTime: file.ModTime().Unix(), } if isDuplicate(meta) { fmt.Println(\u0026#34;File skipped (duplicate by metadata).\u0026#34;) } else { fmt.Println(\u0026#34;File processed (new or changed).\u0026#34;) } } This is very 
efficient and nice, but since it doesn\u0026rsquo;t look at the data at all\u0026hellip; rename the file, update the metadata without changing the content, or copy the file so you have another identical copy of it under a different name, and deduplication collapses.\nSome tools still rely on this method for deduplication, but more often modern tools use metadata matching solely for caching purposes, combined with a more modern approach.\nExact content matching # With this second approach, the content itself is examined to find an exact match.\nTo perform deduplication, data is passed through a function that produces a content identifier of some sort (generally a cryptographic digest) that can be recorded in an index.\nWhen processing new data, if the content identifier is already in the index, then the data was already recorded and we can skip some heavier operations.\npackage main import ( \u0026#34;crypto/sha256\u0026#34; \u0026#34;fmt\u0026#34; \u0026#34;io\u0026#34; \u0026#34;os\u0026#34; ) var seenHashes = map[string]bool{} func computeHash(path string) (string, error) { file, err := os.Open(path) if err != nil { return \u0026#34;\u0026#34;, err } defer file.Close() hash := sha256.New() if _, err := io.Copy(hash, file); err != nil { return \u0026#34;\u0026#34;, err } return fmt.Sprintf(\u0026#34;%x\u0026#34;, hash.Sum(nil)), nil } func isDuplicate(path string) bool { sum, err := computeHash(path) if err != nil { fmt.Println(\u0026#34;Error:\u0026#34;, err) return false } if seenHashes[sum] { return true } seenHashes[sum] = true return false } func main() { file := \u0026#34;data.bin\u0026#34; // path to the file if isDuplicate(file) { fmt.Println(\u0026#34;Duplicate file detected, skipping...\u0026#34;) } else { fmt.Println(\u0026#34;New content, processing...\u0026#34;) } } This has two shortcomings:\nthe entire file has to be read before knowing if it\u0026rsquo;s a duplicate\nif a single bit is changed, the entire file is no longer considered a 
duplicate.\nIf we have a 1TB file, we must first read 1TB of data and compute a digest out of it; THEN, only once we\u0026rsquo;re done, do we know whether we need to do something with that data. If we just append a new-line to the file\u0026hellip; well, it\u0026rsquo;s a new 1TB file even if the rest is unchanged—making this approach costly for large files with minor edits.\nFixed-Size Chunking # Now, that\u0026rsquo;s a much more interesting approach.\nInstead of considering the data as a whole, it is split into fixed-size chunks that are evaluated individually. A 1TB file could for example be split into 1024 chunks of 1GB, then a digest could be computed for each of these chunks and recorded in an index to mark them as seen.\nThis is effectively as if we had split our file into multiple smaller ones of fixed size, then performed an exact content match on each of them individually, while keeping track that they should be glued one to another to produce the original file.\nWhen processing new data, we split it into chunks of 1GB and compute their digests to look them up in the index: if a digest is found, the chunk is skipped as we don\u0026rsquo;t need to process it; otherwise we never saw it, or at least a bit was altered, so it needs to be processed and its digest recorded for future runs to skip it.\npackage main import ( \u0026#34;crypto/sha256\u0026#34; \u0026#34;fmt\u0026#34; \u0026#34;os\u0026#34; ) const chunkSize = 1024 * 1024 // 1MB var seenChunks = map[string]bool{} func processFile(path string) { file, err := os.Open(path) if err != nil { fmt.Println(\u0026#34;Open error:\u0026#34;, err) return } defer file.Close() buf := make([]byte, chunkSize) chunkIdx := 0 for { n, err := file.Read(buf) if n == 0 { break } sum := sha256.Sum256(buf[:n]) key := fmt.Sprintf(\u0026#34;%x\u0026#34;, sum) if seenChunks[key] { fmt.Printf(\u0026#34;Chunk %d skipped (dup)\\n\u0026#34;, chunkIdx) } else { fmt.Printf(\u0026#34;Chunk %d processed (new)\\n\u0026#34;, chunkIdx) seenChunks[key] = true } chunkIdx++ if err != nil { break } } } func main() { processFile(\u0026#34;data.bin\u0026#34;) } The size of chunks depends on your use-case (i.e. you might want small chunks for text and big chunks for video), but it remains fixed within a file.\nThis method has the advantage that chunks can be processed efficiently: the data does not have to be read byte-by-byte, but with fixed-buffer reads that can take advantage of many optimizations to make it extremely fast\u0026hellip;\n\u0026hellip; but the downside is that the fixed-size method implies that data is seen as a global structure where content exists at static offsets\u0026hellip; add or remove one byte, and everything beyond that point is shifted: the offsets themselves have not moved, but the chunks are no longer aligned with them, so they are all considered new and deduplication falls apart.\nContent-Defined Chunking # Finally, there\u0026rsquo;s CDC!\nCDC builds upon the idea of Fixed-Size Chunking: split an input into smaller chunks so that the whole data doesn\u0026rsquo;t have to be reprocessed in case of a single bit change\u0026hellip; but it doesn\u0026rsquo;t use a global structure and static offsets, so it can recover if data is shifted.\nIt uses a function to process the data and cut it into chunks of varying size\u0026hellip; using the data itself to decide where to produce the cutpoints. This means that running the function on the same data twice produces the same cutpoints and the same series of chunks, but altering a single bit causes a cutpoint to shift in the stream and a different series of chunks to be produced. 
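This shift-resilience is easy to demonstrate with a toy content-defined chunker. The sketch below uses a simplified gear-style rolling hash with a tiny mask and no minimum/maximum chunk bounds, nothing like a production FastCDC, and the xorshift generator is just a stand-in data source for the demo. Inserting a single byte at the front shifts every fixed offset, yet almost all cutpoints realign:

```go
package main

import "fmt"

// gear is a toy byte-to-uint64 table, filled deterministically with a xorshift PRNG.
var gear [256]uint64

func init() {
	s := uint64(0x9E3779B97F4A7C15)
	for i := range gear {
		s ^= s << 13
		s ^= s >> 7
		s ^= s << 17
		gear[i] = s
	}
}

// xorshift produces n deterministic pseudo-random bytes for the demo.
func xorshift(n int, seed uint64) []byte {
	out := make([]byte, n)
	s := seed
	for i := range out {
		s ^= s << 13
		s ^= s >> 7
		s ^= s << 17
		out[i] = byte(s)
	}
	return out
}

// cut returns cutpoint positions: a boundary is declared wherever the
// rolling hash hits the mask (about one cutpoint every 64 bytes here).
func cut(data []byte) []int {
	var cuts []int
	h := uint64(0)
	for i, b := range data {
		h = (h << 1) + gear[b]
		if h&0x3F == 0 {
			cuts = append(cuts, i+1)
		}
	}
	return cuts
}

func main() {
	base := xorshift(4096, 42)
	shifted := append([]byte{'x'}, base...) // insert ONE byte at the front

	a, b := cut(base), cut(shifted)
	at := map[int]bool{}
	for _, c := range a {
		at[c] = true
	}
	realigned := 0
	for _, c := range b {
		if at[c-1] { // same boundary, one position later
			realigned++
		}
	}
	fmt.Printf("base: %d cuts, shifted: %d cuts, realigned: %d\n", len(a), len(b), realigned)
}
```

Because each cut decision depends only on the last few bytes in the rolling window, every boundary past the insertion point reappears exactly one position later; a fixed-size splitter would instead see every chunk change.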
Since we still compute digests on chunks to record them in an index, the ones that are found are skipped and the others lead to new records.\nIf you\u0026rsquo;ve followed so far, this should raise the following question:\nBut\u0026hellip; if a cutpoint has been shifted, doesn\u0026rsquo;t it shift all subsequent ones?\nAnd the answer is: no.\nThe function that processes the data only looks at a relatively small window of data and computes a rolling digest to decide if it should insert a cutpoint or not.\nBecause this is a rolling digest, if a change has caused a new cutpoint to be inserted, then after we have read a certain amount past this new cutpoint the chunker resumes producing the same boundaries: the rolling window has exited the modified region.\npackage main import ( \u0026#34;crypto/sha256\u0026#34; \u0026#34;fmt\u0026#34; \u0026#34;io\u0026#34; \u0026#34;log\u0026#34; \u0026#34;os\u0026#34; \u0026#34;github.com/PlakarKorp/go-cdc-chunkers\u0026#34; ) var seen = map[string]bool{} func processCDC(path string) { file, err := os.Open(path) if err != nil { fmt.Println(\u0026#34;Failed to open:\u0026#34;, err) return } defer file.Close() chunker, err := chunkers.NewChunker(\u0026#34;fastcdc\u0026#34;, file) // or ultracdc if err != nil { log.Fatal(err) } i := 0 for { chunk, err := chunker.Next() if err != nil \u0026amp;\u0026amp; err != io.EOF { log.Fatal(err) } sum := sha256.Sum256(chunk) key := fmt.Sprintf(\u0026#34;%x\u0026#34;, sum) if seen[key] { fmt.Printf(\u0026#34;Chunk %d skipped (dup, %d bytes)\\n\u0026#34;, i, len(chunk)) } else { fmt.Printf(\u0026#34;Chunk %d processed (new, %d bytes)\\n\u0026#34;, i, len(chunk)) seen[key] = true } i++ if err == io.EOF { // last chunk reached break } } } func main() { processCDC(\u0026#34;data.bin\u0026#34;) } While the algorithm is more complex than fixed-size chunking, FastCDC remains extremely fast thanks to its streamlined rolling hash and precomputed table—often outperforming naive fixed-size methods in practice.\nSo, what is FastCDC? 
# FastCDC is a high-performance variant of CDC first introduced by researchers in 2016, followed by further improvements in subsequent publications.\nIt was designed to preserve the benefits of CDC, but with a much faster decision process and more balanced chunk size distributions.\nWhere traditional CDC uses expensive sliding window techniques to compute rolling fingerprints at each byte, FastCDC introduces several optimizations.\nHow FastCDC works # FastCDC uses the Gear fingerprinting function—a technique that computes a rolling hash by combining precomputed values from a random table with the incoming byte values through shifts and additions. This replaces the more CPU-intensive Rabin fingerprinting used in classic CDC.\nFastCDC’s Gear table is precomputed at compile time:\n// chunkers/fastcdc/fastcdc_precomputed.go var G [256]uint64 = [256]uint64{ 0x4d65822107fcfd52, 0x78629a0f5f3f164f, 0xd5104dc76695721d, [...] 0x7e23bc6fc8214b8a, 0xeadaea4753b428d7, 0xaa80d0564cf20a65, } The overall flow looks like this:\nRolling hash calculation # For each byte, a new hash is computed based on the last value and a Gear table:\nhash = (hash \u0026lt;\u0026lt; 1) + G[data[i]]\nThis can be done efficiently without a sliding window buffer, which speeds up processing considerably.\nCutpoint decision # A chunk boundary is declared when a bitmask condition is satisfied:\nif hash \u0026amp; mask == value → cutpoint\nThe mask is derived from the target average chunk size, ensuring chunks are distributed around that target with controlled variability.\nSmart window bounds # FastCDC avoids very small or very large chunks by using minimum and maximum window sizes before checking for cutpoints, smoothing chunk distribution.\nThis gives FastCDC several advantages: it can do byte-at-a-time processing with no need for an N-byte rolling buffer, it is cache-friendly thanks to the fixed Gear table and simple operations, and it has predictable performance with adjustable minimum, average and maximum chunk size bounds.\nData: 
[ A, B, C, D, ... ] Gear table: [ G[A], G[B], G[C], ... ] Rolling hash: H = ((H \u0026lt;\u0026lt; 1) + G[Data[i]]) This loop is tight, fast, and easy to implement in Go. Better yet, it avoids memory pressure by not needing to keep large buffers in memory between chunk decisions.\nAs a matter of fact, pre-computed Gear table set aside, our own optimized implementation fits in just a few lines:\nfunc (c *FastCDC) Algorithm(options *ChunkerOpts, data []byte, n int) int { MinSize := options.MinSize MaxSize := options.MaxSize NormalSize := options.NormalSize const ( MaskS = uint64(0x0003590703530000) MaskL = uint64(0x0000d90003530000) ) switch { case n \u0026lt;= MinSize: return n case n \u0026gt;= MaxSize: n = MaxSize case n \u0026lt;= NormalSize: NormalSize = n } fp := uint64(0) i := MinSize mask := MaskS p := unsafe.Pointer(\u0026amp;data[i]) for ; i \u0026lt; n; i++ { if i == NormalSize { mask = MaskL } fp = (fp \u0026lt;\u0026lt; 1) + G[*(*byte)(p)] if (fp \u0026amp; mask) == 0 { return i } p = unsafe.Pointer(uintptr(p) + 1) } return i } Why FastCDC matters # In practice, FastCDC provides CDC-grade resilience to shifted data while running at much higher speed than traditional Rabin variants.\nThis makes it a near drop-in for backup systems, object stores, and delta encoders where throughput matters:\nAlgorithm Nanoseconds per operation Throughput Rabin 1932542209 ns/op 555.61 MB/s FastCDC 117534472 ns/op 9135.55 MB/s Its controlled chunk size variance is especially valuable for deduplication systems, which benefit from avoiding too-small (overhead) or too-large (inefficient reuse) chunks.\nKeyed CDC # Following a recent paper on attacks that target CDC algorithms, not only did we introduce mitigations in plakar itself, but we also introduced a Keyed CDC mode in our go-cdc-chunkers package and added support for a Keyed FastCDC implementation.\nAs we saw in the previous section, FastCDC relies on a Gear table to perform its rolling hash calculation and take its 
cutpoint decision. The values need to be generated randomly to benefit from proper bit distribution and avoid biases that would undermine chunk distribution, but once generated they need to remain the same so that identical input data produces the same cutpoints between runs: the table is usually built-in and considered public information.\nWhile these values are public and supposedly not sensitive, they still have the disadvantage that cutpoints are predictable by everyone. If I share a file with you, you can determine what the cutpoints for that file will be on my machine by running the chunking on yours. The side effect is that if you have a list of chunk sizes but not their content, it can help you determine if a file you know is present within these chunks. Depending on your use of CDC, this may or may not be a privacy concern.\nWith Keyed FastCDC, a key is provided upon chunker initialization. It is used to set up a Keyed BLAKE3 hasher from which an alternate Gear table is derived, a Keyed Gear table if you will. Using the same key produces the same Keyed Gear table and identical cutpoints between two runs, whereas using a different key produces a different table and therefore different cutpoints: for a file you know, you can no longer predict the cutpoints generated by someone\u0026rsquo;s chunker using a key you don\u0026rsquo;t know.\nThe good part is that this Keyed mode bears absolutely no performance cost: the table derivation is a fast computation done only once at chunker initialization, so it is essentially free and there to be used when privacy is a concern.\nAlgorithm Nanoseconds per operation Throughput FastCDC 117534472 ns/op 9135.55 MB/s KFastCDC 115304560 ns/op 9312.22 MB/s To our knowledge, no other CDC library offers a keyed mode for FastCDC, so\u0026hellip; here\u0026rsquo;s some R\u0026amp;D for you straight from Plakar Korp\u0026rsquo;s lab :-)\nConclusion # Our package is open source and distributed under the permissive ISC-license. 
It is free for you to use in any application, including commercial ones.\nFeel free to hop in our Discord channel and ask for help if you want to integrate it somewhere, make improvements to it, or add support for new algorithms.\nIt can be used for a wide range of use-cases, so we are curious to see what you can build with it!\n","date":"11 July 2025","externalUrl":null,"permalink":"/posts/2025-07-11/introducing-go-cdc-chunkers-chunk-and-deduplicate-everything/","section":"Plakar Blog","summary":"We released go-cdc-chunkers, our open source library to provide Content-Defined Chunking. Here’s why deduplication is important.","title":"Introducing go-cdc-chunkers: chunk and deduplicate everything","type":"posts"},{"content":"","date":"11 July 2025","externalUrl":null,"permalink":"/tags/kloset/","section":"Tags","summary":"","title":"Kloset","type":"tags"},{"content":" TL;DR: # We recently introduced our ptar archive format and the feedback was good, but many people felt like this was too tied to the plakar backup solution: if you just want to use a deduplicated archive solution, why should you install a full backup software?\nToday, we unveil kapsul, an ISC-licensed open-source tool dedicated to creating and consuming ptar archives. It only does a subset of what plakar does, but has less requirements and an even simpler interface with zero configuration and no need for an agent.\nThis short post tells you all you need to know to get started testing it.\nWhat Is kapsul? # The kapsul utility allows creating ptar archives, aka. 
capsules, which contain deduplicated, compressed, content-addressed, strongly encrypted data.\nWhat sets it apart from plakar is that by accepting a trade-off and not supporting some of the features that plakar does, kapsul can be implemented as an agentless and zero-configuration tool.\nIn other words: you install, you run, it works right away.\n$ go install github.com/PlakarKorp/kapsul@v0.0.0-beta.10 go: downloading github.com/PlakarKorp/kapsul v0.0.0-beta.10 $ kapsul -f /tmp/bleh.ptar create /private/etc repository passphrase: repository passphrase (confirm): $ kapsul -f /tmp/bleh.ptar ls repository passphrase: 2025-07-07T21:51:57Z bff68fc7 3.1 MB 0s /private/etc With just these three commands, you go from nothing to a /tmp/bleh.ptar that has all of these properties (shameless copy-paste from previous article):\nImmutable — it is write-once, tamper-evident by design. Deduplicated — it is content-addressed, chunks are referenced multiple times. Compressed — it has post-deduplication compression to save more space. Encrypted — it uses the same audited model as the underlying Kloset store. Versioned — it supports granular inspection of previous states. Browsable — it supports browsing via CLI or UI, without full extraction. (Trans)Portable — it works on a USB stick, offline machine, or tape. So What Are The Trade-Offs? # To be zero-config AND agent-less, kapsul only supports a limited set of integrations and does not have the same level of caching as plakar to accelerate the processing of previously archived data.\nLong story short, it can archive data coming from the local filesystem, a remote SFTP server, stdin and another .ptar archive (we will soon extend to other archive formats to allow conversions into .ptar).\nWhile kapsul is simpler, it does not support advanced features like remote read-write targets, custom scheduling, or complex caching mechanisms that plakar provides. 
If you need more granular control over your backup process or advanced features, plakar might still be the better fit.\nWhat Can I Do With It? # That\u0026rsquo;s the nicest part.\nMOST of what plakar can do\u0026hellip; kapsul can do.\nYou can craft a ptar, list or display its content, launch a ui, preview files (including media), \u0026hellip;\n$ kapsul -f /tmp/bleh.ptar create ~/Downloads repository passphrase: repository passphrase (confirm): $ kapsul -f /tmp/bleh.ptar ls repository passphrase: 2025-07-07T23:18:33Z 2a92ad6f 23 GB 55s /Users/gilles/Downloads $ kapsul -f /tmp/bleh.ptar ls 2a92ad6f | tail -5 repository passphrase: 2025-01-07T08:54:15Z -rw-r--r-- gilles staff 75 kB d0f1_01.pdf 2025-01-07T08:54:40Z -rw-r--r-- gilles staff 75 kB d0f1_02.pdf 2025-01-07T08:54:56Z -rw-r--r-- gilles staff 75 kB d0f1_03.pdf 2025-01-07T08:55:10Z -rw-r--r-- gilles staff 75 kB d0f1_04.pdf 2025-04-01T21:41:59Z -rw-r--r-- gilles staff 1.9 MB we-simpsons.png $ kapsul -f /tmp/bleh.ptar cat 2a92ad6f:plakar_1.0.0-throwaway.0_checksums.txt repository passphrase: ffdbd5e4f9748038917b7f7d3307292bd8492ba1849f8ef12a6f1937900e6a6f plakar_1.0.0-throwaway.0_darwin_amd64.tar.gz 0402b978646105478e530010ff3d2c182885f083775842d881df7101b2abe142 plakar_1.0.0-throwaway.0_darwin_arm64.tar.gz fc32c6a3f1c5fe4867c0ee71f285b0b2fd4e6d6bc480a9cf9714946e23629a43 plakar_1.0.0-throwaway.0_freebsd_386.tar.gz 1c0ccc9083c932e572666b0896025f69fb5115c2dd2cc88eac670f9e223b31f3 plakar_1.0.0-throwaway.0_freebsd_amd64.tar.gz 192d09e94bb9fee80eb37759510fad8fa8578dde296405e7d4f5a90a6836ab80 plakar_1.0.0-throwaway.0_freebsd_arm64.tar.gz ee3ffff41412b5257f59f845d465e6982682873bb0775096039492574a5c479a plakar_1.0.0-throwaway.0_linux_386.tar.gz cefdee40f6caa7b261129fc423a96783f0923c764f8340ee2675889f1141ba75 plakar_1.0.0-throwaway.0_linux_amd64.tar.gz 175c57268939065985acc0cd606ac71446c36f45b0f2ba01546baa0c786208d3 plakar_1.0.0-throwaway.0_linux_arm64.tar.gz 
d32091ad7552bb2c36f75caa4ae74dcf14fe2a53f70d593232a33f69abbb0359 plakar_1.0.0-throwaway.0_openbsd_amd64.tar.gz d4d6e8661b34dffed96dc12b71b1070de3a8ce679177dea62661800e49d75184 plakar_1.0.0-throwaway.0_openbsd_arm64.tar.gz $ kapsul -f /tmp/bleh.ptar ui repository passphrase: and some of the commands that are currently not supported, like mount and the like, are going to be implemented in the next few weeks, bringing very powerful capabilities to kapsul.\nOooooh, and maybe I\u0026rsquo;m the only one finding that cool, but\u0026hellip;\nIt even supports accessing a remote .ptar over HTTP/HTTPS with random-access fetching, so you can retrieve specific parts of an archive without downloading it in full:\n$ kapsul -f https://poolp.org/test.ptar ls 2025-06-02T19:43:53Z ed0f6603 3.1 MB 0s /private/etc $ kapsul -f https://poolp.org/test.ptar ui All of the commands that work on a local archive can be used on a remote one, and will transparently use optimized random accesses so the archive is never fetched fully.\nConclusion # A week ago, we released ptar as a plakar subcommand and some people thought it should be a separate tool. A week later, you have a new standalone tool, kapsul, that\u0026rsquo;s ISC-licensed, open-source, free, and that lets you build your own little secure vaults in seconds.\nIn case we need to state the obvious, we care about users \u0026lt;3\n","date":"7 July 2025","externalUrl":null,"permalink":"/posts/2025-07-07/kapsul-a-tool-to-create-and-manage-deduplicated-compressed-and-encrypted-ptar-vaults/","section":"Plakar Blog","summary":"want to craft a ptar archive but you don’t need a full-fledged backup solution ? 
here comes kapsul, our ptar-specific tool, providing all you need from building to restoring and inspecting.","title":"Kapsul: a tool to create and manage deduplicated, compressed and encrypted PTAR vaults","type":"posts"},{"content":"Hi, I’m Julien, co-founder of Plakar.\nBefore we built this, I spent years as an engineer and later as a manager of infra teams. We handled backups, compliance, and recovery.\nIn every place, startups, big companies, regulated sectors, I saw the same routine:\ntar -czf archive.tgz /some/folder We all love that command. But in 2025, it can cause trouble.\nWhat’s changed since .tgz was invented # Back when tar came out in 1979 or even gzip came out in 1994, things were simple:\nData was small, just a few megabytes. Storage was local and trusted. Versioning was not a big deal. Archives ran in one pass, so you had to decompress everything to get one file. Now none of that fits our needs.\nOver the years data grew huge, like terabytes of logs or model checkpoints. We rely on multi‑core work to finish weeks of processing in minutes. We must assume zero trust, so we need proof no one changed anything. Data sits in S3 and other object stores, not on a local disk. We need to track versions and snapshots. And we often want a single file instantly, without waiting for a full decompress.\nPlain old .tgz was never made for this.\nWhy .tgz does not work with S3 # On a traditional POSIX filesystem, many teams run periodic .tgz snapshots of local disks or NFS shares. By contrast, S3 buckets are rarely backed up (a rather short-sighted approach for mission-critical cloud data), and even one-off archives are rarely done.\nIf you want to archive an S3 bucket with tar and gzip, you:\nDownload everything to your machine (generating transfer and storage costs). Run tar. Maybe encrypt separately. Calculate checksums by hand. Upload your archive back somewhere else. Then, if you need to prove integrity or restore just one file, you’re stuck. 
.tgz can’t help. This process is slow, error-prone, and costly. It does not scale to large datasets or S3 buckets.\nWhat we needed instead # We realized we needed an archive that could:\nremove duplicate data automatically to limit storage and transfer costs encrypt by default to protect sensitive data store snapshots and history check integrity with cryptography talk to S3 and other object stores directly let you restore parts of an archive on demand That led us to create Plakar for backup, its storage engine Kloset, and now .ptar, the flat-file version of Kloset.\nHow .ptar works # Instead of a simple byte stream, a .ptar archive is a self‑contained, content‑addressed container.\nHere is what it gives you:\ndeduplication: identical chunks stored once, even across snapshots built‑in encryption: no extra step tamper evidence: any change breaks the archive versioning: keep many snapshots easily S3 native: one command to archive a bucket partial restores and browsing: pick a file without unpacking it all fast targeted restores: grab one file in seconds A simple example # Suppose I have 11 GB in my Documents and two copies of the same folder:\n$ du -sh ~/Documents 11G /Users/julien/Documents $ tar -czf test.tgz ~/Documents ~/Documents Result: about 22 GB compressed.\nWith .ptar:\n$ plakar ptar -plaintext -o test.ptar ~/Documents ~/Documents Result: about 8 GB. Why? .ptar sees the duplicate folder once.\nIn many real-world datasets, a large amount of data is actually redundant: multiple copies, backups, archives, or repeated files across folders. Traditional tools like tar compress everything, even duplicates, which unnecessarily increases the size of the archive. .ptar works differently: it automatically detects and removes duplicates, so each unique chunk is stored only once, no matter how many times it appears. That is why, in the example above, .ptar produces a much smaller archive than .tgz. 
At large scale, the space savings become significant.\nWhen .tgz still makes sense # I admit, .tgz is everywhere:\nIt runs almost anywhere, no dependencies. It is great for small, throwaway archives. But when you need trust, speed, and scale, .ptar is built for 2025.\nTry .ptar # Get the dev build:\n$ go install github.com/PlakarKorp/plakar@v1.0.3-devel.c7a66f1 Then:\narchive a folder:\n$ plakar ptar -o backup.ptar ~/Documents archive an S3 bucket:\n$ plakar ptar -o backup.ptar s3://my-bucket list contents:\n$ plakar at backup.ptar ls restore files:\n$ plakar at backup.ptar restore -to ./restore /Documents/config.yaml inspect one file:\n$ plakar at backup.ptar cat snapshotid:/path/to/file launch the UI:\n$ plakar at backup.ptar ui About .ptar and Plakar # .ptar is part of Plakar, our open‑source backup engine for immutable, deduplicated, and encrypted data. It is in the Plakar CLI today, and will soon ship as a standalone binary if you only need archiving.\nThe code is open source, so feel free to contribute or give feedback.\n.ptar and Plakar make the biggest difference on datasets with lots of redundancy, such as:\nBackups with multiple versions of the same files or folders Email, photo, or document archives containing duplicates S3 buckets with snapshots, backups, or files shared across projects Scientific datasets or logs where many files are identical or very similar Training datasets for machine learning, where many files are duplicated or very similar across different versions or experiments. Conclusion # Archiving has changed. Data is bigger, trust is lower, and we want fast access. If you still use .tgz for all that, you are taking a risk and wasting time/money.\n.ptar is not just another tar. It is designed for today’s needs. And this is only the start. 
We plan more speed, smarter dedupe, standalone binary and smaller metadata.\n","date":"30 June 2025","externalUrl":null,"permalink":"/posts/2025-06-30/technical-deep-dive-into-.ptar-replacing-.tgz-for-petabyte-scale-s3-archives/","section":"Plakar Blog","summary":".tgz made sense in 1994, but today we need archiving that supports deduplication, encryption, S3, and zero trust. here’s why we built .ptar.","title":"Technical deep dive into .ptar: replacing .tgz for petabyte-scale S3 archives","type":"posts"},{"content":"Now that I caught your attention with my mad clickbait skills\u0026hellip; let me explain why this is not complete clickbait, the last reason will surprise you 😊\nAttention, Attention !\nThe feature described in this article is a testing feature, meaning that it is stable enough to be tested by users but not yet available for general consumption.\nYou will be able to test it right away by installing our latest development release:\n$ go install github.com/PlakarKorp/plakar@v1.0.3-devel.c7a66f1 If you like this reading and want more of these tech articles about our work\u0026hellip;\nPlease share on social networks using the icons below the table of contents on your left, star our repo on Github and join us on Discord, where you can lurk to see what we\u0026rsquo;re working on or discuss with our developers ;-)\nThis is very important for us if you want to see us succeed ! 🙏\nTL;DR: # Backup archive formats haven\u0026rsquo;t evolved much since the early days of .tar and .zip.\nThey do their job—but they weren\u0026rsquo;t built for deduplication, encryption, or versioned datasets. Worse, they assume trust in the environment they run in. That\u0026rsquo;s a problem when you\u0026rsquo;re dealing with hybrid infrastructure, compliance requirements, or disaster recovery workflows that need to just work, offline, years from now.\n.ptar is our answer to that. 
It\u0026rsquo;s an archive format designed to encapsulate datasets into a single self-contained, portable, immutable, deduplicated and encrypted file. Think of it as .tar reimagined for zero-trust systems, deduplication, extraction-less fast content access and long-term data integrity.\nThis post explains what .ptar is, why we built it, how it works, and how to use it in practice.\nWhat Is .ptar ? # .ptar, pronounced p-tar, is an archive format designed to pack groups of resources (directories, files, objects, \u0026hellip;) into a single tamper-evident file (meaning that if it\u0026rsquo;s altered or corrupted, you\u0026rsquo;ll know).\nIt’s fully self-contained: data, metadata, structure, version history, and cryptographic integrity checks are embedded directly inside the file. Of course, it\u0026rsquo;s offline and you don’t need a remote service or a backend of any kind to browse or restore it: all you need is an offline .ptar reader such as our open-source tool plakar.\nA .ptar file is:\nImmutable — write-once, tamper-evident by design. Deduplicated — content-addressed, chunks are referenced multiple times. Compressed — post-deduplication compression to save more space. Encrypted — end-to-end, using the same audited model as the underlying Kloset store. Versioned — supports granular inspection of previous states. Browsable — via CLI or UI, without full extraction. (Trans)Portable — works on a USB stick, offline machine, or tape. 
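The tamper-evidence property in the list above boils down to authenticating the archive bytes with a keyed MAC. Here is a deliberately simplified, hypothetical sketch using HMAC-SHA256 over the whole payload; the real format relies on the audited Kloset encryption schema with per-blob MACs plus a whole-archive MAC:

```python
import hashlib
import hmac

def seal(key: bytes, archive: bytes) -> bytes:
    """Append a MAC over the archive so any later change is detectable."""
    return archive + hmac.new(key, archive, hashlib.sha256).digest()

def verify(key: bytes, sealed: bytes) -> bool:
    """Recompute the MAC over the payload and compare in constant time."""
    archive, tag = sealed[:-32], sealed[-32:]
    return hmac.compare_digest(tag, hmac.new(key, archive, hashlib.sha256).digest())

key = b"derived-from-passphrase"  # hypothetical key; real derivation is stronger
sealed = seal(key, b"archive bytes")
assert verify(key, sealed)
# Flip one payload byte: verification fails visibly instead of silently.
assert not verify(key, sealed[:-33] + b"X" + sealed[-32:])
```

The point is the failure mode: a corrupted or altered archive does not restore garbage, it refuses to validate.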
If you’ve ever wanted to export a full backup for disaster recovery, legal archiving, or long-term cold storage — and still be able to introspect it years later — that’s exactly what .ptar is built for.\nIf you want to pack multiple assets in a content-addressed immutable file with built-in integrity validation, that\u0026rsquo;s also what .ptar is built for.\nBut if you just want to produce a .tar or .zip-like archive that comes packed with a ton of user-friendly features, well\u0026hellip; it\u0026rsquo;s built for that too 😊\nCreating a .ptar archive is as simple as the following command:\n$ plakar ptar -o test.ptar ~/Downloads passphrase: passphrase (confirm): The resulting file contains all of ~/Downloads, deduplicated, compressed, encrypted, cryptographically authenticated, easily transportable and immediately usable for restore:\n$ plakar at test.ptar ls 2025-06-24T20:12:53Z a2650f13 11 GB 36s /Users/gilles/Downloads $ plakar at test.ptar restore a2650f13:/ [...] Background # The .tar format has been around since 1979 and the .zip format was initially released a decade later, in 1989. .tar only handles data compaction, so it is often coupled with a compression algorithm such as gzip, bzip2, \u0026hellip; whereas .zip does not split both operations and does both compaction and compression.\nSince what really matters here is the compaction, I\u0026rsquo;ll only mention .tar from now on as both are relatively close in terms of how the archive is structured: more or less a simple stream of entry headers and data.\nAt the risk of surprising you, the .tar (for Tape ARchive) format was built for\u0026hellip; tape drives, and was optimized for sequential writes during compaction and sequential reads during extraction. 
It was not designed for randomly-accessed, encrypted, content-addressed data or long-term archival at scale.\nWhile sequential access made a lot of sense for tape archiving, at the risk of surprising you, again: users are not tape drives.\nNowadays, most people don\u0026rsquo;t save to tapes but rather to random-access storage. They manipulate their archives in a non-sequential pattern, extracting them to browse specific files or directories without thinking for a second about the order of operations.\nSo, not only do they not care about the benefits of sequential I/O patterns for tapes, but they also miss a ton of nice features that are hard/impossible to obtain with a linear structure, like for example, deduplication, restore-less browsing, fast searching, and more \u0026hellip;\nDoes that mean that .ptar is superior to .tar ?\nNope, they serve different purposes and target different users; they can be complementary\u0026hellip; though in my daily life using a .ptar offers far more benefits.\nA few words about deduplication and compression # Due to its structure and sequential I/O pattern in compaction/decompaction, .tar (and .zip, and the like\u0026hellip;) can\u0026rsquo;t rely on back-reference tricks to allow deduplication of previously-seen content (at least not without a major change of format).\nAs a result, it can\u0026rsquo;t perform deduplication at the file level and even less so at the data level: any redundant data is duplicated inside the archive.\nIt doesn\u0026rsquo;t matter, compression will take care of that !\nYeah, no, it won\u0026rsquo;t.\n$ du -sh ~/Downloads 11G /Users/gilles/Downloads $ time tar -czf test.tar.gz ~/Downloads [...] 166.48s user 14.07s system 99% cpu 3:01.53 total $ du -sh test.tar.gz 8.8G test.tar.gz $ time tar -czf test.tar.gz ~/Downloads ~/Downloads [...] 332.98s user 28.09s system 99% cpu 6:02.89 total $ du -sh test.tar.gz 18G test.tar.gz $ time tar -czf test.tar.gz ~/Downloads ~/Downloads ~/Downloads [...] 
499.69s user 41.99s system 99% cpu 9:05.15 total $ du -sh test.tar.gz 26G test.tar.gz As seen above, .tar is not a dedup-aware archiver and if a file is passed twice, it is archived twice. The compression doesn\u0026rsquo;t cancel that because compression algorithms use a sliding window that can only \u0026ldquo;see\u0026rdquo; so far at a time. The very common gzip algorithm used here can only see 32KB, other algorithms may \u0026ldquo;see\u0026rdquo; up to several MB but still\u0026hellip; if the archive is several GB or several TB and the redundancy occurs further than the sliding window, the compression won\u0026rsquo;t cancel duplicate data.\nThat topic would require a deep-dive into how compression works, something a bit off-topic for this article, but let me know if you\u0026rsquo;re interested in such readings, me likey writing :-)\nAnyways, that\u0026rsquo;s a sharp contrast with .ptar:\n$ time plakar ptar -plaintext -o test.ptar ~/Downloads [...] 135.80s user 31.42s system 439% cpu 38.073 total $ du -sh test.ptar 8.2G test.ptar $ time plakar ptar -plaintext -o test.ptar ~/Downloads ~/Downloads [...] 134.91s user 31.09s system 438% cpu 37.892 total $ du -sh test.ptar 8.2G test.ptar $ time plakar ptar -plaintext -o test.ptar ~/Downloads ~/Downloads ~/Downloads [...] 
134.60s user 30.74s system 438% cpu 37.727 total $ du -sh test.ptar 8.2G test.ptar Of course, I\u0026rsquo;m showing a .tar worst-case scenario here for dramatic effect, but as you can see even on the very first backup a gain is already visible when there\u0026rsquo;s redundancy in the source data:\n$ ls ~/Downloads|grep \u0026#39;(\u0026#39; update the btree even when the file was found in the cache #272 (1).mp3 update the btree even when the file was found in the cache #272 (2).mp3 Being dedup-aware, .ptar can skip files it has already included, shrinking the processing time and overall size, but it can also skip chunks of files, so folders containing many variations of a file with similar parts benefit as well.\nDoes .ptar always generate smaller archives ?\nNope, not always, as .ptar adds its own overhead to support indexing, integrity checking, etc\u0026hellip; an archive with absolutely no redundancy is going to end up bigger than a plain .tar. However, the overhead is small enough that considering all the features it brings with it, I\u0026rsquo;d take the overhead any day:\nability to serve the archive remotely with random seeks without full download virtual filesystem navigation (mount with fuse, webdav, \u0026hellip;) a UI with preview, search and categorization for easy locating of content very granular restore and diff-ing at snapshot and file levels synchronization capabilities with other klosets\u0026hellip; Technically, you can host a .ptar on any HTTP server that supports range-requests, and access portions of it from a remote machine without doing a full read, something just not doable with a .tar or .zip, and which proves interesting with large archives for which you only need specific contents:\n$ plakar at ptar+https://plakar.io/test.ptar ui Again, no API, no backend, just a .ptar reader that you launch locally, either through the UI as shown here or through the CLI for some terminal action:\n$ ./plakar at 
ptar+https://plakar.io/test.ptar ls 2025-06-02T19:43:53Z ed0f6603 3.1 MB 0s /private/etc About encryption # .zip supports encryption, either through a widely supported legacy algorithm that has shown its weaknesses, or through the strong AES256\u0026hellip; that\u0026rsquo;s not supported by all .zip readers.\nNot much more to say about .tar except that it doesn\u0026rsquo;t have any provision for encryption, the common way to send encrypted tarballs is to use GPG\u0026hellip; and most of us know how that goes with the general public.\nContrast this again with .ptar that provides audited cryptography by default, producing an archive that has inherent MAC integrity check and that can\u0026rsquo;t be altered without validation failing visibly, but which can also generate plaintext archives for public consumption.\nAbout these limitations # Most archive tools today layer compression and optional encryption on top, but they still operate on cleartext inputs and offer zero insight into version history, deduplication, or even integrity guarantees without layering custom tooling.\nIn modern environments—hybrid cloud, air-gapped vaults, regulatory retention—these limitations aren’t just inconvenient, they’re unsafe. If you’re backing up critical infrastructure, you need something that doesn’t leak metadata, doesn’t trust the storage backend, and makes it obvious if anything was tampered with or silently corrupted — even a decade later.\n.ptar was built as part of the Plakar project to solve this. It combines the Kloset engine’s immutable snapshot model with a transportable archive format: everything you need to restore or audit a backup lives inside a single file. No server dependency. No hidden assumptions.\n.ptar vs Traditional Archives # Traditional formats like .tar or .zip were never meant to handle encrypted, deduplicated, or versioned data. They operate on raw files, often in-place. .ptar flips that model: encryption comes first, then storage. 
Everything is content-addressed, immutable, and verifiable without restoring.\nHere’s how it stacks up:\nFeature ptar tar.gz / zip Encryption Strong by default None or bolt-on Deduplication Native, content-addressed Not supported Versioning Snapshot-aware Not supported Tamper Resistance Cryptographically signed No integrity model or simple CRC Restoration Selective, no extraction Full read or extraction required Portability Self-contained .ptar file Can require additional tools (ie: gpg) Offline Usability Fully offline Fully offline Storage Efficiency High (dedup + compression) Linear, redundant Bottom line: .ptar is built for environments where trust is minimal, bandwidth is constrained, and storage needs to be smart. If you\u0026rsquo;re archiving for compliance or disaster recovery, it doesn\u0026rsquo;t make sense to wrap modern data in a 1979 format. (ouh yeah, I plugged my catchphrase 💪).\nTechnical Architecture # A .ptar archive is a fully self-contained container.\nInternally, it’s structured to preserve everything needed to inspect, verify, and restore one or more Kloset snapshots — without external resources.\nAt a low level, a .ptar consists of:\na configuration: version and params for dedup, compression and encryption. a data section: blobs storing the archive structure, metadata and data: content-addressed, deduplicated, compressed and encrypted. This section has an index of MAC for all its blobs to provide integrity checking at blob level. an index section: lookup tables to provide fast inspection. a footer: offsets to relevant sections of the archive for immediate seek. MAC: MAC of the entire archive for overall integrity check. Here\u0026rsquo;s a simplified overview:\nAll content inside a .ptar is:\nencrypted using the same encryption keys and schema as \u0026ldquo;regular\u0026rdquo; Kloset stores. integrity-checked — any corruption or tampering is detectable before extraction. 
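To see why a fixed-size footer of section offsets enables immediate seeks, here is a toy container in Python. This illustrates only the layout idea, not the actual .ptar encoding: all section names, sizes, and the two-offset footer are invented for the example. The reader jumps to the end, reads the offsets, then seeks straight to the index without scanning the data section:

```python
import io
import struct

# Toy container layout: [data][index][footer], footer = two little-endian uint64 offsets.
data = b"blob0blob1"
index = b"idx:blob0@0,blob1@5"
footer = struct.pack("<QQ", 0, len(data))  # (data offset, index offset)
archive = data + index + footer

f = io.BytesIO(archive)
f.seek(-16, io.SEEK_END)                   # footer is fixed-size: seek from the end
data_off, index_off = struct.unpack("<QQ", f.read(16))
f.seek(index_off)                          # jump straight to the index section
assert f.read(len(archive) - 16 - index_off) == index
```

With this shape, a reader (local or over HTTP range requests) only fetches the footer and index before deciding which blobs it actually needs.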
The archive is designed for streaming and partial access:\nYou can browse contents with a CLI or UI without extracting anything. You can extract a single file without reading the entire archive. You can pipe it into a remote Plakar instance for restoration or inspection. This makes .ptar not just a backup format — but a reliable container for long-term, verifiable storage that travels with its own integrity guarantees.\nSome Real-World Use Cases # The .ptar format isn’t just a theoretical improvement over .tar.gz. It solves concrete problems in modern backup and archival workflows — especially where trust boundaries, compliance, or long-term durability are involved.\nAir-Gapped Backups # Export a .ptar archive to USB, store it in a vault, and forget about it.\nWhen you need it, everything — metadata, snapshots, file content — is inside, encrypted and verifiable. No runtime, no dependencies, no cloud needed.\nCold Storage # .ptar is optimized for random reads and high-density archiving. Snapshots remain deduplicated, compressed, and inspectable without restoring the full payload.\nDisaster Recovery # You can generate .ptar files as part of your offsite rotation. In a worst-case scenario, restoration is as simple as transferring the archive to a fresh host and running plakar restore. No coordination, no external service, no upstream verification required.\nCompliance and Legal Retention # Each .ptar is immutable (tamper-evident), signed, and traceable. Snapshots inside the archive retain their metadata, timestamp, and audit trail — making .ptar a strong fit for GDPR, HIPAA, and internal data retention policies. It could become a legally-verifiable record of state.\nDistribution and Transfer # Need to ship a dataset or backup across environments, air gaps, or legal zones?\n.ptar packages it up in a single file, preserving structure, permissions, and history. 
You can hand it over safely, knowing the contents can be validated — but not altered.\nHow to Create and Use a .ptar (hand-holding) # A .ptar builder and reader are implemented into plakar, so creating and interacting with archives doesn’t require extra tooling. Everything happens via the CLI.\nCreate a .ptar from local directories # The following command creates an encrypted snapshot of my ~/Downloads directory into the file downloads.ptar:\n$ plakar ptar -o downloads.ptar ~/Downloads passphrase: passphrase (confirm): The resulting file has its content deduplicated, compressed, encrypted and is self-verifying. No need to bundle external metadata or config files.\nA non-encrypted version can be produced by passing the -plaintext option:\n$ plakar ptar -plaintext -o downloads.ptar ~/Downloads Browse archive contents (no extraction needed) # A .ptar can be browsed without extracting the actual data.\n$ plakar at downloads.ptar ls repository passphrase: repository passphrase (confirm): 2025-06-24T00:04:01Z 3055ddc3 12 GB 34s /Users/gilles/Downloads $ plakar at test.ptar ls 3055ddc3:media/ 2025-05-03T19:36:29Z drwxr-xr-x gilles staff 736 B audio $ plakar at test.ptar ls 3055ddc3:media/audio | grep hiphop 2025-05-03T19:15:39Z -rw-r--r-- gilles staff 4.6 MB hiphop1.mp3 2025-05-03T19:18:35Z -rw-r--r-- gilles staff 4.3 MB hiphop2.mp3 $ This gives you a tree view of all files, snapshot info, timestamps, and version diffs — similar to ls, but scoped inside the archive.\nOf course, you can also use the UI, providing you with a local web-based filesystem browser, preview, search and more:\n$ plakar at downloads.ptar ui repository passphrase: repository passphrase (confirm): Inspect a single file # Inspecting a single file is as simple as using cat on a specific snapshot:file, as shown below:\n$ plakar at downloads.ptar cat 3055ddc3:dragon.txt repository passphrase: repository passphrase (confirm): ___====-_ _-====___ _--^^^#####// \\\\#####^^^--_ _-^##########// ( ) 
\\\\##########^-_ -############// |\\^^/| \\\\############- _/############// (@::@) \\\\############\\_ /#############(( \\\\// ))#############\\ -###############\\\\ (oo) //###############- -#################\\\\ / \u0026#34;\u0026#34; \\ //#################- -###################\\\\/ (_) \\/###################- _#/|##########/\\######( \u0026#34;/\u0026#34; )######/\\##########|\\#_ |/ |#/\\#/\\#/\\/ \\#/\\##\\ ! \u0026#39; ! /##/\\#/ \\/\\#/\\#/\\| \\ ||/ V V \u0026#39; V \\\\#\\##\\ ~~~ /##/##/ V \u0026#39; V V \\| ||| \\ \\| | \\| | \\\\#\\| |/##/##/| |/ | | / / ||| ||| |_|_|___|_|___|/##\\___/##/##/|_|_|__|_|_| ||| ||\\ .---. .---. /###/ \\###\\ .---. .---. /|| \\ | | | | | |###/ \\###| | | | | | / \\| | | | | |#/ \\#| | | | | |/ |_| |_| |_| |_| |_| |_| Useful for quickly validating what’s in a backup without extracting or restoring the whole archive.\nRestore files from archive # $ plakar at test.ptar restore -to ./recovery /etc/nginx/nginx.conf You can restore full trees, subdirectories or single files.\nSync into a regular kloset # .ptar can also be used as a mean to import data into a regular kloset:\n$ plakar at /var/backups sync from test.ptar This makes it easy to sync or relocate backups between machines, zones, or storage tiers: you can export some or all snapshots to a .ptar archive, then use that .ptar as a sync source for the target machine.\nConclusion # The .ptar format was built because we needed to be able to export Kloset stores, and traditional archive formats weren’t designed for encrypted, deduplicated, or versioned data.\nWith .ptar, you get a single file that’s encrypted, immutable, portable, and fully self-contained. You can inspect it, restore from it, verify it — without needing a running server, a database, or even internet access. It works offline, across systems, and under stress. 
If you\u0026rsquo;re archiving sensitive environments — production configs, logs, legal data — .ptar ensures the archive is both safe to store and safe to move, even across untrusted systems.\nIt is more than just a backup format. It’s shaping up to be a foundation for portable, secure data packaging — usable in CI/CD, air-gapped delivery, or regulatory environments where traditional tooling just can’t keep up.\nIf you\u0026rsquo;re already using Plakar, generating a .ptar is one CLI call. If you\u0026rsquo;re building disaster recovery workflows, compliance retention, or cold storage pipelines, .ptar gives you a format you can trust — ten years from now, on any machine, without any surprises.\nThis isn\u0026rsquo;t a better .tar, it\u0026rsquo;s a new tool, for a different era (another catchy phrase !).\n","date":"27 June 2025","externalUrl":null,"permalink":"/posts/2025-06-27/it-doesnt-make-sense-to-wrap-modern-data-in-a-1979-format-introducing-.ptar/","section":"Plakar Blog","summary":".ptar is our own archive format, a self-contained kloset, a container for your data. You end up with a standalone file that provides deduplication, compression, encryption, with all of the fancy features of a kloset store!","title":"It doesn't make sense to wrap modern data in a 1979 format, introducing .ptar","type":"posts"},{"content":"Hello everyone,\nToday we’re shipping a small minor release: v1.0.2.\nIt brings an automatic security-update checker, fixes a relative-path bug when using your agent, and delivers a 60× speed boost + lower memory usage on S3 backups.\nSince many of you rely on S3 daily, we couldn’t wait until a larger release—here’s how to grab it.\nVia Go:\n$ go install github.com/PlakarKorp/plakar@v1.0.2 Via installer:\n$ curl https://plakar.io/install.sh | sh Downloading plakar 1.0.2 signature file... Downloading plakar 1.0.2 release file... Downloading plakar 1.0.2 public key... Verifying plakar 1.0.2 signature... 
Signature Verified plakar-1.0.2.tar.gz: OK Extracting plakar 1.0.2 release... Installing plakar 1.0.2... What\u0026rsquo;s in that release ? # Nothing too big: four fixes, two of them interesting enough for our community that we want them available to all as soon as possible.\nSecurity check # Plakar can now check the releases atom feed of our project and warn you if a newer release carries a reliability or security fix that you should really install.\nWe think it\u0026rsquo;s a good option to have for most users, but if you\u0026rsquo;re not comfortable with this check, you can turn it off permanently using plakar -disable-security-check and live on your own adventurous path.\nFix for relative pathnames through agent # We spotted a small bug when using relative paths for a repository through the agent.\nPlakar has no problem resolving relative paths for backups; however, if the path of the repository was relative AND the CLI was executed from a directory different from the agent\u0026rsquo;s, then the CWD was incorrectly resolved and the relative path resolution failed.\nFor example, plakar at ./foobar backup could mean /tmp/foobar on the CLI but ~gilles/foobar on the agent.\nThis is now fixed !\nS3 speed improvements # We have identified a better way for our importer to iterate over \u0026ldquo;directories\u0026rdquo; in S3 buckets.\nLong story short, in S3 there\u0026rsquo;s no concept of directories: we use the / character as a path separator and pretend that it creates nested objects when they are really flat at the top level.\nWhat we had initially overlooked is that the MinIO client supports a recursive listing option, which handles the prefix-based “directory” emulation more efficiently than our custom logic. 
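To make the directory-emulation point concrete, here is a small, self-contained Python simulation over flat S3-style keys. It is not the actual fix (that relies on the MinIO client's recursive listing option), just an illustration of why walking emulated directories costs one listing call per level while a recursive listing needs a single pass:

```python
def list_prefix(keys, prefix=""):
    """Emulate one non-recursive List call: direct children of a prefix."""
    children = set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        slash = rest.find("/")
        if slash >= 0:
            children.add(prefix + rest[:slash + 1])  # emulated sub-directory
        else:
            children.add(key)                        # plain object
    return children

def walk(keys, prefix=""):
    """Directory-style walk: one List call per emulated directory."""
    calls, files = 1, []
    for entry in list_prefix(keys, prefix):
        if entry.endswith("/"):
            sub_files, sub_calls = walk(keys, entry)
            files += sub_files
            calls += sub_calls
        else:
            files.append(entry)
    return files, calls

keys = ["a/b/one.txt", "a/b/two.txt", "a/three.txt", "four.txt"]
files, calls = walk(keys)
assert sorted(files) == sorted(keys)
assert calls == 3    # one call each for "", "a/" and "a/b/"
recursive_calls = 1  # a recursive listing returns every key in one pass
```

On a deep bucket, the per-directory call count grows with the number of emulated directories, which is where the observed speedup comes from.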
This drastically improves the performance of listing resources.\nIn one user\u0026rsquo;s test case, this delivered an impressive x60 performance boost on a backup over S3, reducing a backup that took ~14 minutes to ~13 seconds.\nThis by itself was worthy of a minor release :-)\nS3 memory improvements # One of our users, @anbcorp, reported an OOM kill when doing backups.\nWe set up a few tests and quickly identified that it only affected the S3 storage backend, but oddly enough it didn\u0026rsquo;t trigger on my machine or that of another developer. It only took a few minutes of profiling to figure out what was happening:\nWhen we do not know the size of a file, the minio client library does a big allocation (512 MB). The problem is that not only do we not know the size of some files before they are pushed, but we also parallelize quite a bit, leading the client library to perform a lot of big allocations.\nThis isn\u0026rsquo;t an issue when you have a small number of cores (low parallelism) or a large amount of memory (which can absorb the big allocations), but if you have a large number of cores and little memory (that was @anbcorp\u0026rsquo;s case), you end up running a large number of huge allocations in parallel until you get terminated.\nThis only affects one specific call in the storage backend, which is the only one where we don\u0026rsquo;t know the size of files and use heavy parallelization, but we found a fix that solves this issue and manages to avoid the code path that does the big allocations.\nThis was confirmed to fix the issue, and was also by itself a good reason to make a minor release.\nWhat\u0026rsquo;s next ? # We\u0026rsquo;ll be talking about the capabilities of PTAR, our own archiving format, very soon on this blog.\nStay tuned !\nCommunity # Want to help us ?\nStar us on Github (yes, it matters !) 
Join our developers and users on Discord Special thanks to all early users, contributors, and supporters who have helped shape this first release—your involvement has been invaluable!\n","date":"3 June 2025","externalUrl":null,"permalink":"/posts/2025-06-03/plakar-v1.0.2-was-released-mostly-s3-improvements/","section":"Plakar Blog","summary":"Plakar v1.0.2 adds an automatic security check for new critical releases. It also fixes relative-path resolution when using an agent, plus delivers dramatic S3 performance and memory improvements for faster, more reliable backups.","title":"Plakar v1.0.2 was released: mostly S3 improvements!","type":"posts"},{"content":"Last update on: 06/05/2025\nPublisher # Plakar\nThe website accessible at https://plakar.io/ (the “Website”) is published by PLAKAR, a simplified joint-stock company (société par actions simplifiée) with a capital of €1,289.86, registered with the Paris Trade and Companies Register under number RCS Paris B 933 509 754, and whose VAT number is FR79933509754 (hereinafter referred to as “Plakar”).\nThe company’s registered office is located at 149 avenue du Maine, 75014 Paris, France.\nCredits # Publication Director: Spark Angle B.V. represented by its legal representative, Mr. Julien Mangeard.\nWebmaster: Plakar\nDesign \u0026amp; Development: Plakar\nHost: KANDBAZ SAS, 1 rue de Stockholm, 75008 Paris, France, +1 (415) 802-2316.\nContact: help@plakar.io\nPurpose # Plakar is a company specializing in secure, efficient, and open-source data backup solutions. Its platform is designed to help individuals and organizations protect critical data against loss, ransomware, and operational failures, through immutable, encrypted, and deduplicated backups. With a focus on simplicity, performance, and compliance, Plakar provides a robust and developer-friendly solution for storing, browsing, and restoring data across diverse environments. 
The website aims to showcase the technology, explain its benefits, offer access to the open-source project, and provide documentation and support resources for users, developers, and partners.\nHypertext links # Plakar declines all responsibility for websites accessible from its own Website. The hypertext links placed on this Website that lead to other internet resources, including those of partners, are clearly identified and have been the subject of prior information and/or authorization from the targeted websites.\nPlakar undertakes to remove any hypertext links upon the first written request from the owners of the linked websites. However, hypertext links directing users to other online resources do not engage Plakar’s liability.\nIntellectual property # The Website and all its components (hereinafter, the “Intellectual Property Elements”), including but not limited to software, structures, infrastructures, databases, and all types of content (texts, images, visuals, logos, trademarks, etc.) edited or used by Plakar, are protected under applicable intellectual property laws.\nAll users acknowledge and agree that the Intellectual Property Elements, including all associated intellectual property rights, are the exclusive property of Plakar.\nAny reproduction or representation, in whole or in part, of the Intellectual Property Elements without Plakar’s authorization is strictly prohibited.\nLikewise, unless prior written authorization is obtained from Plakar, users are prohibited from using, reproducing, adapting, modifying, creating derivative works from, distributing, licensing, selling, transferring, publicly displaying, transmitting, broadcasting, or otherwise exploiting the Intellectual Property Elements in any manner.\nPersonal data # Plakar does not collect any personal data in connection with browsing the Website. 
However, Plakar may process personal data for purposes such as processing an order, identifying a technical support request, responding to correspondence, providing a subscription, or handling a job application. If applicable, Plakar commits to complying with the relevant personal data protection regulations.\nhttps://plakar.io/privacy-policy/\nCookies # Cookies are text files stored on your computer, tablet, or phone when you visit a website or application. When visiting the Website, Plakar uses Google Analytics cookies to understand how users arrive at, navigate, and interact with the Website to improve its services. These cookies collect information in a manner that does not directly identify the user or their device, nor do they track browsing activity across other websites. They are also automatically deleted when the user closes their browser.\n","date":"5 May 2025","externalUrl":null,"permalink":"/legal-notice/","section":"Plakar | The Open Standard for Backup and Restore","summary":"","title":"Legal Notice","type":"page"},{"content":"Effective date: October 8, 2024\nWelcome to the website (the “Site”) of Plakar (“Plakar,” “we,” “us,” or “our”). Plakar is a technology company that provides an open-source backup solution, as well as hosted and supported services (collectively referred to as the “Services”). We also maintain an active developer and user community (the “Community”).\nThis privacy policy describes how we handle personal data collected through your use of our Services. Depending on how you interact with us — through our website, product, or community — the type and purpose of data collection may vary. Regardless, you always have rights and choices regarding your personal data as explained below.\nBy using the Site or accessing any of our Services, you agree to the terms outlined in this Privacy Policy. If you do not agree, please discontinue use of our Services.\n1. 
What Information We Collect # Personal Data You Provide # We may collect the following categories of personal information that you provide directly:\nContact Information: name, email address, mailing address, and phone number when you interact with us (e.g., contact forms, newsletter signup, job applications).\nAccount Data: credentials and preferences if you create a user account.\nSupport Communications: data you submit when contacting our support or requesting assistance.\nCommunity Contributions: information you share when participating in our Community (forum posts, project contributions, comments, etc.).\nSocial Media Information # When you interact with our pages on LinkedIn, Twitter, GitHub, or similar platforms, we may collect public information you make available, as well as aggregated analytics from those platforms.\nAutomatically Collected Data # We use cookies and analytics tools to collect:\nDevice Information: type, operating system, IP address, browser, language preferences.\nUsage Data: how you navigate and use the Site, including pages viewed and interactions.\nEmail Engagement: whether emails from us are opened or clicked.\nCookies and Tracking Technologies # We use:\nCookies and local storage (e.g., for remembering preferences)\nWeb beacons (to track site or email usage)\nAnalytics tools such as Google Analytics or privacy-focused alternatives\nData Processed on Behalf of Customers # For paid or managed backup services, we may process encrypted user data. In such cases, Plakar acts as a data processor, not a controller. Any personal data in those backups remains under the control of our customer.\nWe do not use any customer data to train or improve generalized machine learning or AI models.\n2. 
How We Use the Information # We use your personal information to:\nService Delivery # Operate and maintain the Site and Services\nManage user accounts and access\nProvide technical support and customer service\nCommunication # Respond to inquiries and support tickets\nNotify you of updates to terms, policies, or features\nService Improvement # Understand how users interact with our Services\nImprove features, usability, and security\nLegal and Compliance # Comply with legal obligations or requests\nDetect, investigate, and prevent security breaches or fraud\nMarketing (Opt-In) # Send updates about product releases, blog posts, events, or offers\nYou can opt out at any time via unsubscribe links or email\n3. Information Sharing # We do not sell or rent your personal data. We may share information with:\nService Providers (e.g., hosting, support, analytics)\nProfessional Advisors (e.g., lawyers, accountants)\nPartners (e.g., if co-hosting events or joint content)\nLegal Authorities if required by law\nAffiliates or Successors (e.g., in case of a merger or acquisition)\nThe Community: content you post in public forums may be publicly visible\n4. Data Retention # We retain personal information as long as necessary to fulfill the purposes stated in this policy, or longer if required by law (e.g., for tax or contractual obligations).\n5. Your Choices # Update Data: Contact us at privacy@plakar.io to correct your personal data.\nOpt-Out: You can unsubscribe from marketing emails at any time.\nCookie Preferences: You can configure your browser or use our consent banner to manage cookies.\nNote: Some essential cookies are required for the site to function and cannot be disabled.\n6. Your Rights # Plakar respects your data protection rights. 
Depending on your location, you may have the right to:\nAccess your personal information\nCorrect inaccurate or outdated information\nDelete your data (\u0026ldquo;right to be forgotten\u0026rdquo;)\nExport your data in portable format\nObject to certain uses of your data\nIf you are located in the EU/EEA, UK, or California, additional rights may apply.\nYou may contact us at privacy@plakar.io to exercise these rights.\n7. Children # Our Services are not intended for children under 18. We do not knowingly collect personal data from minors.\n8. Third-Party Links # Our Site may link to third-party services (e.g., GitHub, social media). Their privacy policies govern any data collected there. We are not responsible for their practices.\n9. Security # We use appropriate technical and organizational measures to protect your personal data, including encryption, access control, and secure hosting. If you have concerns, contact security@plakar.io.\n10. Changes to This Policy # We may update this Privacy Policy to reflect changes to our practices. When we do, we will revise the \u0026ldquo;Effective date\u0026rdquo; above. Continued use of our Services constitutes your acceptance of the revised policy.\n11. 
Contact Us # For questions or requests related to this Privacy Policy or your personal information, contact us at:\n📧 privacy@plakar.io 🌍 www.plakar.io\n","date":"5 May 2025","externalUrl":null,"permalink":"/privacy-policy/","section":"Plakar | The Open Standard for Backup and Restore","summary":"","title":"Privacy Policy","type":"page"},{"content":"In February, we introduced the first plakar beta release, and since then we have worked tirelessly to build our first stable release, acting on user feedback and adding nice new features along the way.\nWe\u0026rsquo;re now thrilled to announce the release of Plakar v1.0.1, marking a major milestone in our journey: locking in a stable base that empowers us to experiment, expand, and evolve without compromise, while bringing a ton of exciting new features.\nOur goal is to set a new open-source standard; we won\u0026rsquo;t settle for less:\nJust like everyone agrees that git is a go-to solution for code versioning, that docker is a go-to solution for containers, or that k8s is a go-to solution for orchestration\u0026hellip; we want people to recognize plakar as a go-to solution for backups.\nHow are you going to achieve that? # To transition from a small project with ideas to a full-fledged business with means, Plakar was officially incorporated in September 2024. We’ve since secured a $3 million pre-seed round to fuel our ambitions: assembling the ideal team (which we did), building the most expansive ingestion ecosystem possible, and establishing a new open-source standard. Our optional enterprise-grade features will provide the long-term financing needed to sustain innovation and growth on this journey.\nWhat is Plakar? 
# Plakar is a next-generation, snapshot-based backup and archiving solution built for today’s data-driven workflows, built upon Kloset, our immutable data store engine.\nKloset provides an intuitive snapshot model and a custom virtual filesystem (VFS) that lets you take fast incremental backups and roam through past versions of your data as easily as browsing files on your desktop.\nWhether you’re protecting personal projects or enterprise-scale datasets, Plakar delivers portability, performance, and security without the typical headaches of traditional backup tools.\nAt its core, Plakar was born out of frustration with legacy solutions that are complex, platform-locked, and painful when handling large or constantly changing data. It strips away that complexity by:\nSnapshot Simplicity: Capture your data state, then record only the changes on each run, all with one simple command.\nPortable Archives (.ptar): Package a full repository—or just a slice of it—into a single, deduplicated, optionally encrypted file that you can clone, sync, diff, or mount anywhere plakar is installed.\nFlexible Backends: Write to local disks, SQLite, S3 buckets, SFTP servers, or any other supported storage with a single command.\nBuilt-in Encryption: Protect your backups end-to-end with audited encryption algorithms so your data stays safe in motion and at rest.\nWhy Plakar? # Because backups shouldn’t feel like chores or require armies of specialists. Plakar transforms your archives into active, reusable data assets instead of dormant blobs on a tape. Use snapshots to accelerate AI model training, spin up historical datasets for analytics, or satisfy audit requirements without juggling half a dozen tools.\nKey advantages include:\nZero-Learning-Curve: Familiar file commands (backup, restore, cat, ls, diff, sync, \u0026hellip;) work the way you expect. 
Maximum Portability: Share a .ptar file with a colleague or migrate a repo across clouds in minutes.\nExtreme Efficiency: Deduplication and compression minimize storage costs, even for ever-growing datasets.\nOpen Source \u0026amp; Extensible: Join a growing community; inspect, customize, or extend Plakar to fit your environment.\nAnd coming soon—watch for PlaSQL, our interactive query layer that lets you search for data within archived snapshots like a live database.\nKey Features of Plakar v1.0.1 # Snapshots With Efficient Deduplication # Quick, incremental, and reliable backups powered by Plakar’s built-in virtual filesystem (VFS) and indexes.\nEffortlessly maintain historical versions without redundancy, regardless of how many snapshots you take. A repository can efficiently hold thousands of backups, so you don\u0026rsquo;t have to worry about backing up \u0026ldquo;too much\u0026rdquo; or \u0026ldquo;too frequently\u0026rdquo;: more restore points, less space used.\n3-2-1 Backups With Alerting Out Of The Box # Without any external tools, you can set up a 3-2-1 backup strategy right away: back up a local copy, then synchronize to a remote SFTP machine and to Amazon\u0026rsquo;s Glacier, all in just a couple of minutes. If you want an air-gapped copy, you can also use our ptar archive format to export a repository, or a portion of it, to an offline drive.\nIf you don\u0026rsquo;t have the infrastructure for alerting, we also have you covered with our own alerting service, provided for free to registered users.\nPortable Archives (.ptar) # PTAR is our immutable archive format engineered for effortless transfer, dependable storage, and straightforward restoration across any environment.\nIt delivers all of kloset’s capabilities—deduplication, encryption, and more—in a single, standalone file that cannot be modified once created, but that can be written to offline storage or passed to other people.\nYou can package data from any supported source (filesystem, S3, SFTP, etc.) 
or from another kloset, whether to copy an entire repository or just selected snapshots.\nAfter creation, you can use plakar to interact with a .ptar archive exactly like any other repository (read-only), including through our UI for browsing, searching, and previewing!\nMultiple Storage Backends # Flexible support for various storage options, including local disk, SQLite, SFTP, S3, and more—enabling you to adapt Plakar to your existing infrastructure effortlessly.\nPowerful Data Integrations # Easily import and export data from multiple sources and to multiple destinations such as FTP and SFTP servers, S3, and others, simplifying data migration and interoperability.\nSecurity \u0026amp; Encryption # Built-in audited encryption mechanisms ensure robust data protection, maintaining the confidentiality and integrity of your backups, whether stored locally or remotely.\nNot only does plakar use encryption to protect the data within a repository, but it also relies heavily on it to maintain integrity, to detect corruption and attempts by a third party to alter data, and, to some extent, to conceal what kind of data is being backed up.\nAn awesome and snappy UI # Use our UI to browse any kloset repository, local or remote, inspect differences, preview data content before restore, download parts of the kloset, etc\u0026hellip;\nIt provides full previews for a wide variety of files, including PDFs, audio, and video, without the need for a full local restore.\nAuthentication to our services # If you have your own infra and techies around, you can use plakar fully on-premise and build your tooling without any need to interact with us once the software is installed.\nFor others, we will provide a set of additional services and add-ons backed by our infrastructure. 
To enable them, you will need to be logged in so we can tie them to your instance.\nAuthentication is painless and no account creation is needed: you either use plakar login on the command line or click the login button in the UI\u0026hellip; which will either prompt for an e-mail address so we can send you an autologin link, or use OAuth to authenticate you against one of your identity providers (GitHub-only for now).\nThis is by no means a requirement, and if you don\u0026rsquo;t use our services and add-ons then you are absolutely not required to authenticate or even talk to us.\nAlerting and reporting # Backup is often a set-and-forget process: you set up backups and forget about them until you realize you need them and hope they\u0026rsquo;ll work.\nOur alerting service allows authenticated users to have their plakar send concise task summaries, not actual backup data, which we can then monitor to notify them if tasks start failing.\nThe alerting can be done through the UI, but optionally an alert can also be sent to the e-mail address that backs their identity:\nWork In Progress # Now that we have a stable base to build upon, we will start bringing new storage connectors and data integrations to extend the abilities of plakar.\nWe have lots of plans, so I\u0026rsquo;ll only mention a few that will land in an upcoming minor release, making them available soon.\nWe will rely on the community to propose new integrations and set up a voting system to prioritize requests.\nKloset Storage Connectors # Rclone # A kloset storage connector that uses the rclone project as its transport layer, allowing plakar to host klosets on a variety of popular services.\nKloset Data Integrations # Stdio # An integration that imports data from standard input and/or exports data to standard output. 
It is suitable for ingesting database dumps (e.g. pg_dump, mysqldump, mongodump, \u0026hellip;) and restoring them back through an ingestion tool.\nIMAP # An integration that imports/exports mail from/to an IMAP server. Suitable for backing up your mail server or a mail account at your mail provider.\nNotion # An integration that backs up and restores the content of a notion.so account, saving your precious data.\nRclone # An integration that uses the rclone project as its transport layer, allowing plakar to import and export data to and from several popular services, including Google Drive, Google Photos, OneDrive, \u0026hellip;.\niPhoto # An integration currently limited to importing photos from iPhoto. This is an early-stage work in progress, though a functional proof of concept already exists.\nMulti # The multi integration chains multiple integrations, allowing data from several sources to be bundled into a single snapshot and restored together.\nIt is intended to cover cases like WordPress, Joomla, ownCloud,\u0026hellip; where a backup is both a filesystem (fetched through FTP or SFTP) and a database. In this scenario, a multi-integration snapshot would hold both the filesystem and database backups tied together.\nWhat\u0026rsquo;s Next for Plakar? # We\u0026rsquo;re actively working to enhance Plakar with new features and capabilities.\nOur immediate roadmap includes:\nReleasing an SDK so you can bring support for new storage backends and integrations effortlessly.\nExtended support for import/export plugins.\nEnhanced performance and incremental indexing for even faster backups and restores.\nAdvanced diffing and snapshot browsing interfaces.\nDashboards for better visualization of repository content and activity.\nA first batch of enterprise features: snapshot signing, multi-user with ACL support, \u0026hellip;\nWe value community involvement highly and welcome your ideas, feedback, and contributions. 
Please join us by participating in GitHub issues, discussions, or our community forums.\nCommunity # Get started with Plakar today and become part of our community:\nTry Plakar now with our quickstart\nJoin our developers and users on Discord\nSpecial thanks to all early users, contributors, and supporters who have helped shape this first release—your involvement has been invaluable!\n","date":"1 May 2025","externalUrl":null,"permalink":"/posts/2025-05-01/introducing-plakar-v1.0-to-redefine-open-source-data-protection-with-3m-funding/","section":"Plakar Blog","summary":"Immutable, queryable, encrypted snapshots with context and integrity — Kloset redefines how data is stored, verified, and reused","title":"Introducing Plakar v1.0 to redefine Open-Source Data Protection with $3M funding","type":"posts"},{"content":"On the surface, plakar may appear as just another backup tool: it takes data and safely stores it until restoration is needed—essentially, a sophisticated version of the cp command.\nYet beneath this simplicity lies the powerful Kloset engine, designed to package data along with its context, structure, metadata, and integrity—much like containers bundle applications with their dependencies. The versatility of Kloset allows it to address numerous specialized scenarios:\nReliable backup and restoration\nLong-term archiving\nSecure log retention for compliance and audits\nPreservation of legal evidence\nVersioning of datasets for machine learning\nDigital authenticity proofs for contracts and media\nIntegrity assurance in software supply chains\nAnd likely many other applications waiting to be discovered.\nPlakar leverages Kloset’s capabilities to create compact, immutable, secure, and transparent backups. It seamlessly integrates with diverse storage solutions including filesystems, databases, object storage, and distributed platforms, without relying on external state or centralized coordination. 
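The deduplicated, immutable storage just described rests on content addressing: every chunk is named by a hash of its own bytes. A minimal sketch of the idea (in Python for brevity — Plakar itself is written in Go, and all names here are invented for illustration, not taken from its code):

```python
import hashlib

class ContentAddressedStore:
    """Toy content-addressed store: chunks are keyed by the SHA-256 of
    their contents, so identical data is stored exactly once, and data
    cannot be silently altered without changing its address."""

    def __init__(self):
        self._chunks = {}

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        # Writing the same content twice is a no-op: deduplication for free.
        self._chunks.setdefault(digest, data)
        return digest

    def get(self, digest: str) -> bytes:
        data = self._chunks[digest]
        # Integrity check: the address must match the content it names.
        assert hashlib.sha256(data).hexdigest() == digest
        return data

store = ContentAddressedStore()
a = store.put(b"hello world")
b = store.put(b"hello world")  # duplicate content, same address
print(a == b, len(store._chunks))  # → True 1
```

The same property that deduplicates also makes tampering evident: changing a single byte changes the digest, so a chunk can never drift away from the address that names it.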
Our vision: back up anything, store anywhere, restore everywhere.\nThis post introduces the Kloset architecture, explains its design decisions, and outlines how it enables features like incremental backups, versioning, deduplication, granular restores, and verifiable integrity — all with minimal system footprint.\nKey concepts:\nImmutable Storage: Data that, once stored, cannot change.\nContent-Addressable Storage (CAS): Data stored and retrieved by a unique identifier derived from its contents (hash).\nVirtual Filesystem (VFS): An abstraction representing data in a structured, hierarchical format that mimics a traditional filesystem.\nCore Architecture \u0026amp; Principles of Kloset # Kloset was built with a clear set of non-negotiable goals: backups must be immutable, fully encrypted at the source, efficient even at scale or over slow links, browsable without full restores, portable, and verifiable without relying on external metadata.\nFeature | Kloset | Rsync | Tarballs | Volume snapshot | Open-source competitors\nImmutable Backups | ✅ | ❌ | ❌ | ✅ (limited) | ✅\nIncremental Efficiency | ✅ | ✅ (suboptimal) | ❌ | ✅ | ✅\nGranular Restore | ✅ | ✅ (partial) | ❌ | ✅ (limited) | ✅\nData context | ✅ | ❌ | ❌ | ✅ | ✅\nBrowsable | ✅ | ✅ | ❌ | ✅ | ✅ (limited for some)\nEncryption | ✅ Built-in | ❌ (manual) | ❌ (manual) | ❌ (manual) | ✅ (limited for some)\nSelf-contained | ✅ | ❌ | ✅ | ❌ | ✅ (limited for some)\nData indexing | ✅ | ❌ | ❌ | ❌ | ❌\nTyped Snapshots | ✅ | ❌ | ❌ | ❌ | ❌\nArchive format | ✅ | ❌ | ✅ | ❌ | ❌\nMulti-source | ✅ | ❌ | ❌ | ❌ | ❌\nMulti-target | ✅ | ❌ | ❌ | ❌ | ❌\nTraditional approaches like tarballs, rsync deltas, or volume snapshots couldn\u0026rsquo;t meet these requirements. We needed a model that could, so we chose to represent all data internally as a virtual filesystem. 
It’s a flexible, expressive abstraction that fits any data source — filesystems, databases, APIs — and supports efficient storage, indexing, and recovery.\nModern Differentiators # Modern backup solutions have improved but often still fall short in areas Kloset addresses directly:\nTyped Snapshots: Kloset snapshots can represent different logical types (e.g., filesystems, databases, object stores) rather than just raw files.\nPortable Archive Format (PTAR): Export any snapshot, or snapshot collection, as a fully self-contained portable archive for easy distribution, offline storage, or transfer.\nBuilt-in Indexing and Analyzing: Snapshots are natively indexed and searchable; you can query the contents without restoring them first.\nMulti-source Backups: A single backup can aggregate data from multiple sources (e.g., local files, S3 buckets, and a Postgres database) into one coherent snapshot.\nMulti-target Restoration: Restore one snapshot to multiple destinations simultaneously, with native format translation when necessary.\nCryptographic Auditing: Built-in tamper detection and independent verifiability at every level (chunks, files, metadata).\nCore Model # The idea is simple:\nRead data from any source and map it to a virtual filesystem\nStore that filesystem efficiently using immutable, content-addressed chunks\nLater, read it back and export it to any compatible target\nEverything in Kloset is structured around this model, and the system is guided by five key principles:\nImmutable Storage\nData is split into deduplicated, compressed, encrypted chunks and stored by content hash. Once written, it’s never modified.\nSelf-Describing Snapshots\nSnapshots include all the metadata needed to understand the structure and context of the backup. They’re portable and browsable without external dependencies.\nPluggable Connectors\nSources and targets are modular. 
Kloset doesn’t care what it’s backing up — only that the data can be listed and read.\nGranular, Stateless Access\nYou can inspect or restore a specific file or folder without reading the whole backup or loading everything into memory.\nCryptographic Auditability\nEach snapshot carries a digest tree for data and metadata, making it independently verifiable and tamper-evident.\nThis architecture lets Kloset scale from small local backups to massive cloud deployments, all while remaining safe, fast, and flexible.\nHere comes Kloset # Kloset, our immutable data store engine, is in charge of abstracting the problem into smaller problems and providing solutions.\nStorage Layer # At the foundation of Kloset is the storage layer, responsible for writing and retrieving streams of raw, opaque bytes at specific storage locations. The term opaque is important: by the time data reaches this layer, it has already been compressed and encrypted, meaning the storage layer has no visibility into its structure or semantics.\nDespite this, the storage layer plays a critical role in organizing opaque data meaningfully and predictably. It ensures that each stream is stored in a consistent, content-addressed layout, enabling efficient lookup, deduplication, and long-term stability — even without needing to interpret the data.\nAll data written by Kloset is strictly immutable: once a stream is stored, it is never modified. This simplifies concurrency, ensures consistency, and supports optimizations such as caching, multi-writer safety, and low-overhead synchronization.\nThe storage layer is designed for scalability and performance. 
It is fully parallelized, able to handle concurrent reads and writes across multiple streams or backends, and backpressure-aware: it adapts its behavior to the performance and throughput of the underlying storage system to avoid resource overuse.\nImportantly, the storage layer is built on a pluggable connector model, allowing it to interface with different backend systems such as filesystems, SFTP servers, or cloud object stores. These connectors abstract the details of connection and data transfer, and will be described in more detail later in this article.\nRepository Layer: Encoding, Decoding And Indexing # Sitting directly above the storage layer is the repository layer, a logical component that acts as a local encoding/decoding proxy for all data entering or leaving the system.\nDuring a backup operation, data is not sent directly to storage. Instead, it flows through the repository layer, which compresses and encrypts it locally before passing it to the storage layer for immutable persistence. Similarly, during inspection or restore, data is retrieved through the repository, which decrypts and decompresses it before exposing it in cleartext to higher layers.\nThe repository layer provides a \u0026ldquo;local view\u0026rdquo; of the stored data—meaning the data is now readable and decoded, but still unstructured. For example, file contents and associated metadata may both be available, but the repository doesn’t establish any relationship between them; that responsibility is deferred to higher layers like snapshots or virtual filesystems.\nImportantly, the repository layer also maintains a local index of content already present in storage. This index allows Kloset to avoid redundant writes and enables fast existence checks and lookups without needing to query the underlying storage repeatedly. 
As a result, most operations—like checking if a chunk is already stored—are handled locally and efficiently.\nBecause it is built directly on top of the storage layer, the repository inherits its scalability, performance characteristics, and backpressure-aware behavior. It can parallelize encoding and I/O, adapt to backend throughput, and handle high-volume workloads smoothly. A single repository instance can support multiple concurrent writers during backup or multiple concurrent readers during inspection and restore.\nIn essence, the repository is a stateless, high-performance gateway between structured user data and raw storage—handling transformation, indexing, and deduplication in a clean, composable layer.\nSnapshot Layer: Structuring Data \u0026amp; Metadata # At the top of the Kloset architecture lies the snapshot layer, responsible for giving structure and meaning to the raw data exposed by the repository.\nWhile the repository layer provides access to decoded but unstructured data chunks, the snapshot layer organizes them into coherent groups of related data that represent a backup at a specific point in time. Each snapshot captures a complete view of a dataset, including:\nSnapshot type (filesystem, database, application, \u0026hellip;)\nIndexing for faster search\nTree hierarchy for browsable objects, files, directories, \u0026hellip;\nMetadata (timestamps, permissions, ownership, etc.)\nContent hashes for deduplication and integrity\nLogical relationships and structure reconstructed from the raw data\nSnapshots enable features like incremental backups, change tracking, and historical inspection, by identifying and indexing what has changed between backup operations. 
They do not modify existing data but instead refer to immutable content already stored in the repository.\nBy abstracting the low-level details, the snapshot layer makes the data navigable, queryable, and restorable, allowing higher-level tools and users to interact with backups in familiar, structured ways—such as restoring a folder or comparing versions of a file.\nInternally, the snapshot layer is built on top of a custom virtual filesystem, designed to model files, directories, and their relationships in a flexible and efficient way. This virtual filesystem serves as the foundation for snapshot representation and will be discussed in more detail in the following sections.\nStorage bridge: Universal Backend Integration # Storage connectors are pluggable components that allow Kloset to interface with a variety of backend storage systems, such as local filesystems, SFTP servers, or S3-compatible object stores.\nEach connector is responsible for handling the specifics of connecting to its backend and exposing a minimal, unified interface that supports:\nListing content\nWriting data streams\nReading partial data from a given offset\nThis abstraction enables Kloset to interact with all supported storage backends in a consistent and backend-agnostic way, regardless of the underlying protocols or infrastructure. 
Since all data at this level is immutable, compressed, and encrypted, connectors operate purely on opaque byte streams—without needing to understand or interpret the data\u0026rsquo;s meaning.\nThanks to this simplicity, implementing a new storage connector can be very lightweight—a basic connector may require only a few hundred lines of code, making it easy to extend Kloset’s compatibility with new storage platforms.\nData bridge: Bridging External Data # Source and target connectors act as bridges between the external world and Kloset’s internal snapshot layer, enabling seamless data ingestion and restoration.\nSource connectors are responsible for scanning external data sources—such as filesystems, databases, or remote APIs—to discover their structure and expose their contents in a readable, consistent format. This allows the snapshot layer to construct a structured, local representation of the source, effectively capturing it as a backup.\nTarget connectors, in contrast, take a snapshot and translate it back into the format expected by the destination system. This might involve reconstructing files and directories on disk, restoring cloud objects, or pushing data into a remote service.\nIn addition to data transport, connectors also provide a visualization layer, offering intuitive representations of external data. This capability enhances user understanding and simplifies the management of backup and restoration workflows.\nBecause these connectors operate purely at the transport level, they do not need to deal with encoding, encryption, or storage internals. 
This makes them simple and lightweight to implement, often requiring only minimal code to support a new source or target system.\nBy clearly separating data capture and restoration responsibilities, source and target connectors allow Kloset to integrate with a wide range of systems—handling everything from simple file trees to complex, domain-specific data sources—without impacting the core snapshot logic.\nVirtual Filesystem (VFS): Efficient Snapshot Navigation # Kloset includes a custom-built virtual filesystem (VFS) that models snapshot data as a structured hierarchy of files and directories. It is designed to implement the semantics of Go’s standard fs.FS interface, making it easy to integrate with existing Go tooling and libraries that expect filesystem-like behavior.\nUnder the hood, the VFS is backed by a custom B+Tree, offering efficient, ordered lookups and traversal—ideal for managing large datasets with deep directory structures or for running range-based queries (like path prefix scans or diff operations).\nKey Features\nGo fs.FS compatibility: Exposes a familiar and idiomatic interface for navigating and accessing virtual filesystems in Go. Efficient B+Tree backend: Provides fast insertions, lookups, and ordered traversal, which is essential for snapshot indexing and comparison. Lazy loading \u0026amp; low memory footprint: The VFS does not need to reside fully in memory. Instead, only the portions actively being worked on—such as a directory being browsed or a file being read—are loaded on demand. This design keeps memory usage light, even when handling large or complex snapshots. Immutable view: Files and directories in the VFS represent snapshot data and are never modified after creation, supporting reliable, consistent access patterns. Portable and stateless: Being decoupled from the host filesystem, the VFS operates entirely in user space and can be safely used for serialization, inspection, or streaming. 
By offering a structured, navigable, and resource-efficient interface to snapshot contents, the VFS forms the backbone of higher-level features like restores, comparisons, and virtual browsing—all without sacrificing performance or portability.\nAdvanced Features \u0026amp; Capabilities # Indexes: Powerful Query \u0026amp; Inspection # While data blocks are being stored, Kloset simultaneously builds a structured metadata index that captures everything about the snapshot’s logical content and context. This index is not an afterthought — it is a first-class citizen in the architecture, enabling powerful querying, filtering, and introspection.\nWhat the Metadata Captures\nStructure\nThe complete structure of the dataset — including nested directories, symbolic links, and hard links for filesystems — is recorded in a virtual filesystem model.\nApplication-Specific Context\nSource connectors can embed domain-aware metadata (e.g., database schema details, mount point info, or volume names), enabling deep inspection of structured data.\nPermissions, Timestamps, Ownership\nFull POSIX-style metadata is preserved, making it possible to restore not just content, but also its exact execution and access semantics.\nTags, Labels, Logical Groupings\nSnapshots and their contents can be annotated with logical metadata — including tags, labels, or policy hints — allowing for advanced filtering and lifecycle management: organising snapshots per business unit, production vs staging, etc\u0026hellip;\nThe metadata engine enables in-place querying and comparison of snapshots, without needing to restore or unpack them. 
Example use cases:\nDiffing:\n“What changed since snapshot X?” — quickly compute file-level or directory-level diffs.\nFiltering:\n“List all .sql files created last week.”\nAudit \u0026amp; Forensics:\n“Show me every modified file in /etc since Tuesday.”\nRestore Planning:\n“Preview only tagged files for restore to a staging environment.”\nBecause the metadata engine is tightly integrated with the snapshot and repository layers, all of this functionality comes with no performance penalty and requires no unpacking or external indexing. It’s ready the moment a snapshot is completed.\nOn-Demand Inspection \u0026amp; Granular Restore # Kloset enables fast, precise, and flexible restoration workflows by leveraging the same metadata structures and block indices used during backup. Instead of requiring full unpacking or temporary reconstruction of a snapshot, Kloset supports on-demand access to exactly the data you need.\nKey Capabilities\nBrowse Without Restoring\nSnapshots can be navigated like a live filesystem, thanks to Kloset\u0026rsquo;s virtual filesystem and metadata index. You can explore the full hierarchy, list files, and view metadata without restoring a single byte of content.\nGranular, Targeted Restores\nNeed just a single database table or config? A file or folder? Content of an S3 object? Kloset allows partial restores by streaming only the minimal required blocks to reconstruct the requested content — no full snapshot unpacking, no overhead.\nFormat-Agnostic Output\nRestored data doesn’t need to return to its original format or location. Kloset supports restoring to alternative targets, such as converting a filesystem snapshot to S3 objects, or extracting structured data into a different format. This makes it ideal for cross-system workflows.\nDry-Run Support (coming soon)\nBefore executing a restore, you can preview exactly what would be restored, down to individual files and blocks. 
This makes it easy to validate operations, plan migrations, or inspect backups for completeness without triggering any changes.\nBy avoiding full snapshot reconstruction and streaming only the required data, Kloset keeps restores fast, lightweight, and stateless — whether you\u0026rsquo;re retrieving a single file or rerouting a dataset to a new system.\nEfficient Snapshot Cloning \u0026amp; Synchronization # Kloset supports efficient snapshot cloning and synchronization between instances or across storage backends. This functionality is built directly into the engine and is designed to be deduplication-aware, incremental, and portable, making it ideal for modern distributed environments.\nKey Features\nDeduplication-Aware Sync\nDuring synchronization, only new or missing blocks are transferred. Kloset uses content-addressed storage to detect overlap between source and destination, ensuring that redundant data is never copied. This makes cloning bandwidth-efficient and highly scalable, even across large or historical datasets.\nIncremental \u0026amp; Asynchronous\nSync operations work incrementally, transferring data in small batches and optionally running asynchronously in the background. This allows long-lived sync jobs to resume without starting over, and makes it possible to keep remote instances or backups up to date over time with minimal effort.\nNetworked or Offline Operation\nSynchronization can occur over a network, or entirely offline using portable snapshot bundles. This is particularly useful for disconnected or bandwidth-constrained environments where snapshots are first seeded via physical transfer, then updated later through delta sync.\nSnapshot-Level Integrity \u0026amp; Reconciliation\nKloset performs cryptographic reconciliation during sync, verifying snapshot digests and metadata trees on both sides to ensure consistency. 
It guarantees that cloned snapshots are identical in structure and integrity to the source.\nThis syncing model unlocks a wide range of real-world scenarios:\nEdge-to-Core Synchronization: Push backups from edge devices or remote workers to a central data center or cloud archive. Multi-Cloud Disaster Recovery: Replicate snapshots across providers or regions for resilience and failover. Offline Seeding: Preload a snapshot in an isolated environment, then reconnect later to sync deltas. By combining snapshot immutability, content-addressing, and portable metadata, Kloset enables safe, efficient, and verifiable movement of data between systems — without relying on fragile heuristics or full re-transfers.\nBuilt-In Security \u0026amp; Cryptographic Integrity # Kloset is designed with security as a foundational, non-optional feature. Its security layer is always enabled, woven directly into the architecture, not layered on top as an afterthought. Its cryptographic design has been reviewed by an independent auditor, and the project is maintained by developers with a strong understanding of security concepts.\nThe system guarantees confidentiality, integrity, and traceability for all data at all times — during backup, storage, and restore.\nKey Security Features\nEnd-to-End Encryption (in transit and at rest)\nAll data is encrypted at the source before it ever leaves the client, using modern cryptographic primitives. Data remains encrypted throughout transit and is stored in its encrypted form at rest. No plaintext ever touches the storage backend, and no intermediate component ever sees unencrypted content — not even connectors.\nPeriodic Integrity Verification\nKloset supports configurable integrity scans, which re-verify stored chunks against their cryptographic digests. These can be triggered manually, scheduled periodically, or run automatically as part of a maintenance workflow. 
This ensures long-term consistency and allows early detection of silent corruption or backend issues.\nCryptographic Snapshot Manifests\nEvery snapshot includes a self-contained manifest of all its files, directories, metadata, and lineage — cryptographically signed and hashed. This allows Kloset to reconstruct snapshot history, verify parent/child relationships, and detect tampering or unauthorized modifications. Snapshots are cryptographically provable.\nAppend-Only Audit Logs (coming soon)\nKloset maintains an append-only log of operations, where each action (backup, restore, delete, verify, etc.) is recorded with a digest that links to previous entries. This creates a verifiable chain of custody — a tamper-evident audit trail that can be independently inspected and reconciled.\nBuilt for Compliance # These security properties make Kloset an ideal fit for regulated environments where auditability and data protection are mandatory. Snapshots can be stored and restored in a way that satisfies legal and operational requirements under frameworks such as:\nGDPR (General Data Protection Regulation) HIPAA (Health Insurance Portability and Accountability Act) NIS2 (EU Network and Information Security Directive) Because encryption is built-in and metadata is cryptographically verifiable, compliance is not bolted on — it is enforced by design. Snapshots can be used as cryptographic proofs of backup validity, integrity, and lineage, making Kloset suitable for organizations with strict governance, forensic, or retention obligations.\nLightweight \u0026amp; Embeddable by Design # Kloset is not a standalone application — it is a lightweight, embeddable backup engine designed to be integrated into purpose-built executables such as plakar. Its focus is on portability, simplicity, and robustness, making it easy to embed in backup tools, storage utilities, or automation pipelines.\nKloset imposes no runtime requirements and requires no root privileges or external services. 
It is meant to run safely and efficiently in user space, and is optimized for diverse environments, from servers and containers to resource-constrained systems.\nHow It Fits Into Plakar # Plakar uses Kloset as its engine — the component that all user-facing commands (via CLI or UI) ultimately call into. Whether you’re triggering a backup, restoring a path, inspecting differences between two versions, or syncing to a remote — it’s Kloset doing the work underneath.\nWhat\u0026rsquo;s Next for Kloset \u0026amp; Plakar? # Kloset is fully integrated into the Plakar beta and will be getting:\nA public plugin SDK (connectors, interpreters, backends) More inspection tools (diffs, browsing, search) Formal spec and manifest format for snapshot interoperability Curious about Kloset?\nExplore Plakar, join our community on Discord and start building your own connectors and integrations.\n","date":"29 April 2025","externalUrl":null,"permalink":"/posts/2025-04-29/kloset-the-immutable-data-store/","section":"Plakar Blog","summary":"Immutable, queryable, encrypted snapshots with context and integrity — Kloset redefines how data is stored, verified, and reused","title":"Kloset: the immutable data store","type":"posts"},{"content":" In honor of World Backup Day – March 31 # March 31 is World Backup Day, a global reminder that your data isn\u0026rsquo;t immortal, your hard drive will eventually die, and “I thought I saved it” isn\u0026rsquo;t a recovery strategy.\nAnd no, your photos aren’t actually backed up on iCloud; they\u0026rsquo;re just synced. Google and Microsoft don’t truly back up your emails; that\u0026rsquo;s your responsibility. Slack can and will lose your precious conversations, and yes, even your fancy S3 buckets can vanish because a cloud admin clicked the wrong button (trust me, it happens more often than you\u0026rsquo;d like). 
Remember, the Cloud is just someone else\u0026rsquo;s computer managed by someone who hasn\u0026rsquo;t had their coffee yet.\nSo today, let’s raise a toast to the unsung hero of the digital world: the quiet one, the awkward one, the one that never gets a thank you unless something crashes. And if you don’t have a backup to toast with, spare a thought for those colleagues you really don’t want to organize a farewell party for after the next ransomware attack or the inevitable human error (even though you enabled versioning on S3).\nLet\u0026rsquo;s explore how backups evolved from panic-driven memory dumps to intelligent, queryable snapshots, and perhaps you\u0026rsquo;ll finally see backups as your new best friend, not just a grudging necessity.\nThe 1950s to 1970s: Brute-force survival # Welcome to the era of mainframes, magnetic tapes, and IBM engineers who backed up data as if it was a military drill, because it practically was.\nNo folders, no files, no undo buttons. Backup meant dumping every single bit of memory or disk onto tapes using tools like dd, tar, or cpio. There was no \u0026ldquo;restore document\u0026rdquo; option: your recovery plan was reloading everything, slowly, manually, and with all the grace of a brick falling from the sky.\nIt wasn\u0026rsquo;t elegant, but it worked. Mostly.\nThink of it as photocopying your entire house just to preserve the bathroom.\nThe 1980s and 1990s: Archives you could carry # Then came personal computers and the sweet agony of limited floppy disk space (remember those 1.44MB disks?). Compression became essential, giving us ZIP, RAR, and ARJ. Suddenly backups weren\u0026rsquo;t just about survival; they were portable.\nPeople didn\u0026rsquo;t just back up; they zipped, emailed, shared, hid, and labeled archives as \u0026ldquo;homework_final_FINAL.zip.\u0026rdquo; But restoration? Let\u0026rsquo;s be honest, it wasn’t a priority. The internet was still dial-up and barely critical, so who cared if restoration was fragile? 
Your backups fit in your backpack, and that was cool enough.\nBackups felt personal. Restoring? Meh, tomorrow’s problem. Honestly, we spent more time choosing funny filenames than worrying if we could ever open them again.\nLate 1990s to mid-2000s: Incrementals, deduplication, and system clones # As IT expanded, backups matured rapidly. Incremental backups emerged, making nightly backups manageable without drowning in tapes. Hardware deduplication dramatically reduced storage costs, with Data Domain appliances becoming overnight stars, giving IT managers new hope.\nSimultaneously, system cloning revolutionized disaster recovery. Tools like Norton Ghost, Acronis True Image, and Apple\u0026rsquo;s Time Machine enabled quick, bootable restorations. Time Machine even made backups stylish, letting users flip through past states as if browsing Netflix. Finally, backups had better user interfaces than most corporate websites at the time.\nYet, these backups were still tied closely to their environments: you couldn\u0026rsquo;t peek inside without restoring first.\n2005 to today: Smarter, safer, and open (thanks, ransomware!) # Because nothing says ‘Happy Monday’ like an encrypted hard drive and a Bitcoin ransom note.\nEnter the open-source backup revolution: Borg, Restic, Duplicity, Tarsnap and Kopia stepped onto the scene. Incremental snapshots became standard, finally burying full backups. Deduplication and encryption weren\u0026rsquo;t optional; they became essential, driven largely by escalating ransomware threats.\nLegacy backup solutions scrambled to add encryption at rest and in transit, but bolting on security didn\u0026rsquo;t overcome fundamentally outdated designs. Automation, DevOps pipelines, and cloud environments demanded backups be non-intrusive, frequent, and recoverable at a granular level.\nBackups finally became proactive, strategic, and ransomware-resistant. Yet, most still required a full restore to reveal their contents. 
They remained boxed-up archives rather than usable data assets.\nUntil now.\n2025 and beyond: Enter the Kloset # Today’s data isn’t neatly packaged in simple files anymore. It\u0026rsquo;s databases, cloud objects, SaaS platforms scattered across hybrid, edge, and distributed infrastructures. You don\u0026rsquo;t just need to store this data; you need to audit, query, reuse, and restore it seamlessly across entirely different environments, something legacy filesystem-based backup approaches simply aren\u0026rsquo;t built for.\nEnter Kloset, the groundbreaking backup engine powering Plakar. Kloset encapsulates each snapshot into a self-contained, immutable, structured data asset. Like a container that packages an app with everything it needs, Kloset packages data with context, structure, metadata, and integrity. You can inspect and query snapshots without restoring them, conduct forensic searches and compliance audits, or integrate them directly into CI/CD pipelines.\nEffortlessly transition between different environments: from databases to filesystems, S3 buckets to local disks, edge to cloud. Kloset transforms backups from mere archives into actionable, portable data assets, designed explicitly for reuse rather than recovery alone.\nAnd yes, it\u0026rsquo;s proudly open source.\nFinal thoughts: What have we learned this World Backup Day? # Backup isn’t about files anymore; it’s about managing complex data streams flowing from countless applications, SaaS services, and distributed infrastructures. Your backups must match the complexity and flexibility of modern data ecosystems.\nIt’s time to stop seeing backups as chores or emergency-only solutions. Embrace them. Unlock their potential.\nThis World Backup Day, take a moment to ensure you have at least one regular backup of all your critical data, no matter what backup technology you\u0026rsquo;re using; that’s already a great start. 
If you really want to sleep well at night, keep regular copies in at least three different locations, media, or providers, with at least one offline (Deep Archive or magnetic tapes are your friends).\nAnd if you find that complicated, well, yes, it can be. Maybe it’s time to give Plakar a try.\n","date":"31 March 2025","externalUrl":null,"permalink":"/posts/2025-03-31/a-short-history-of-backup/","section":"Plakar Blog","summary":"From magnetic tapes to immutable snapshots, how backups evolved into strategic, queryable, open tools ready for modern data challenges","title":"A Short history of backup","type":"posts"},{"content":"Hello, it\u0026rsquo;s me again!\nWe released our first beta in late February, but we weren’t just going to sit back and stay idle.\nI\u0026rsquo;m glad to announce that we have just released v1.0.1-beta.13, our latest version of plakar, incorporating several new features and solving some of the bugs reported by early adopters.\n$ go install github.com/PlakarKorp/plakar/cmd/plakar@v1.0.1-beta.13 We hope you will give it a try and give us feedback!\nWeb UI improvements # Importer selection # So far, we have mainly displayed our ability to back up a local filesystem, but plakar can back up other sources of data such as S3 buckets or remote SFTP directories.\nWe have extended the UI to allow filtering the snapshots per source type:\nIt is the first of a series of filtering criteria designed to ease identifying relevant snapshots.\nContent browsing # We also introduced MIME-based lists allowing you to restrict browsing to specific types of files. It may seem like just a filter on top of results, but it really leverages our snapshots\u0026rsquo; built-in indexes API to provide fast browsing through an overlay tree: the MIME browsing is a custom VFS with directories and files matching a given MIME. 
We\u0026rsquo;re essentially\u0026hellip; a database now :-)\nWe currently provide a selection menu based on MIME main types, but it could be turned into more specific browsing like restricting to jpg files rather than all images. To see it in action, simply select a main type and see how the list changes to only display relevant items:\nThis will be extended to support new ideas soon and our snapshots support holding many such overlays, so we have room for creating very nice features on top of this mechanism as we move forward.\nTimeline # When browsing a snapshot, each directory page now contains a Timeline widget:\nThis allows you to identify if that directory is also part of other snapshots, and will soon also display if they are identical or not, pinpointing snapshots that have divergences.\nThe timeline works not only for directories but also for files, so that it is possible to identify which snapshots carry a file and soon which ones have a different version of it.\nConsidering that we are already able to provide unified diff patches between two files of different snapshots, we\u0026rsquo;re likely to provide features to ease reverting to a specific version of a file in case a snapshot shows errors were introduced.\nWe have lots of great ideas to build on top of this feature, so stay tuned for more!\nNew core features # Checkpointing # Our software can now resume a transfer should the connection to a remote repository drop. 
It does this through checkpointing of transfers and provides to-the-second resume for a backup originating from the same machine: if you do a TB backup and the connection drops after two hours, restarting the backup will skip over all the chunks that have already been transmitted without re-checking existence, leading to a fast resume from where it left off.\nThis was introduced very recently and is a crucial component of our fully lock-less design, as will be discussed in a future tech article.\nFTP exporter # I feel almost ashamed to put that here, but hey\u0026hellip; it\u0026rsquo;s a new core feature.\nWe already supported creating a backup from a remote FTP server:\n$ plakar backup ftp://ftp.eu.OpenBSD.org/pub/OpenBSD/7.6/arm64 We can now also restore a backup to a remote FTP server:\n$ plakar restore -to ftp://192.168.1.110/pub/OpenBSD/7.6/arm64 1f2a43a1 So, yes, I guess we\u0026rsquo;re back in the 90s\u0026hellip;\nThe good news is that this was unplanned and took only 20 minutes to implement, providing a good skeleton to start from if you want to write your own exporter.\nBetter S3 backend for repository # Plakar supports creating a repository in an S3 bucket, but that support was limited to AWS and Minio instances. 
That limitation has been lifted and we can now host repositories on a variety of S3 providers.\nSadly, we didn\u0026rsquo;t implement the fix in the S3 exporter yet because\u0026hellip; we forgot.\nThat should be done in the next beta, allowing us to restore to a variety of S3 providers too.\nOptimizations # We did a first round of optimizations in several places; the performance boost is already very visible, but\u0026hellip; this is the first round of many, so expect moooore speed :-)\nB+tree nodes caching # It doesn\u0026rsquo;t show, but plakar is actually closer to a database than it seems.\nWe don\u0026rsquo;t just scan and record a list of files; we actually record them in a B+tree data structure that is spread over multiple packfiles, with each node decompressed and decrypted on the fly as the virtual filesystem is being browsed.\nIn previous betas, the B+tree didn\u0026rsquo;t implement any caching whatsoever and was the main point of contention. With this new beta, it comes with a nodes cache that considerably boosts performance.\nOn my machine, a backup of the Korpus repository (\u0026gt;800k files) was 6x faster on a first run, and 10x faster on a second run than before the nodes caching was implemented.\nConsidering our constraint not to rely on in-memory indexes for scalability, these kinds of performance boosts are impressive and bring us closer to other solutions that work fully in-memory.\nOther improvements to this are on their way, so this is not the end of it!\nParallelized check and restore # In previous betas, both check and restore subcommands were written to operate sequentially, making them very slow as they would not make use of machine resources to parallelize.\nA first pass was done to parallelize them and, while the approach is still naive with no caching and no shortcuts taken (identical resources are checked twice rather than once), this already produced a 2x boost on my machine for a Korpus check and restore.\nCheck will 
be improved by using a cache to avoid re-doing the same work twice, which should be very interesting on a large repository with a lot of redundancy.\nRestore is already operating at mostly top speed with the target being the bottleneck, but there are still ideas to go one step further. We\u0026rsquo;ll discuss that when they have been PoC-ed or implemented.\nIn-progress deduplication # We realized that due to our parallelization, chunks could sometimes be written multiple times to a repository because a snapshot would spot them in different files, realize they are not YET in the repository but not that they were already seen in a different file.\nWe introduced a mechanism to track in-flight chunks and discard duplicate requests to record them to the repository, saving a bit more disk space and maintenance deduplication work.\nCDC attacks publication # Colin Percival from the Tarsnap project has co-authored a paper on attacks targeting CDC backup solutions:\nIn this paper, they described Parameter Extraction attacks and Post-Parametrization attacks against CDC backup solutions including Tarsnap, Restic and Borg.\nThe attacker’s general goal is to find out what files the user has uploaded, either now or in the future.\nThe Parameter Extraction attack targets CDC implementations relying on secret values to set up the chunker: a random irreducible polynomial for Rabin (Restic), a secret base and a prime modulus for Rabin-Karp (Tarsnap) and a random table of numbers for Buzhash (Borg). We rely on FastCDC which uses a Gear table with predetermined public values; the parameters are therefore already known and our security model relies on protecting data secrecy after it\u0026rsquo;s been chunked.\nThe Post-Parametrization attacks are more interesting to us as we are in this scenario. 
If we assume that an attacker knows the parameters of a chunker and can observe storage or traffic, they can obtain some level of information about a backup repository.\nInformation leak # When an adversary knows the chunking parameters, they can simulate the chunking process on known files to generate corresponding lists of chunk sizes. If they also have a method to identify which chunk sizes are present in a repository, they can infer with considerable confidence whether specific files exist.\nFor a simplified example, consider an attacker who chunks the /bin directory, thereby obtaining a list of chunk sizes for each file. If the attacker then monitors network traffic to my repository and observes the following sequence of chunk sizes:\n88320 106844 79623 86182 109251 122564 88059 75087 66823 82371 85138 68684 115906 119036 They could immediately conclude that the repository includes the /bin/bash program, all without needing to analyze the actual content of the chunks.\nAttacks # Basically, there are two places an attacker can tap into to deduce chunk sizes: when chunks are stored and when they are retrieved.\nStorage-time # During a backup, plakar builds so-called packfiles that bundle several chunks together until they reach a maximum size. 
Several packfiles are created in parallel, chunks from the same file are distributed across them so they are interlaced, and they are encrypted with no delimiter between them so as not to provide any indication of how many there are, where they begin or where they end.\nA server operator looking at a packfile will only see a string of seemingly random bytes, whereas an attacker monitoring traffic will only be able to obtain the size of the packfile itself, which is fairly normalised.\nRetrieval-time # During retrieval, two things happen: a chunk existence test and a chunk fetch.\nThe chunk existence test is done by fetching an encrypted index of chunks, synchronizing to a local cache and resolving locally without querying the repository: it is possible to validate that a repository holds the entire data set without emitting any chunk requests to the repository.\nA server operator looking at indexes will only see a string of seemingly random bytes, whereas an attacker will only be able to obtain the size of the encrypted index, which does not provide much valuable information.\nThe chunk fetch is where we potentially leak information.\nMitigations # On Friday, upon reading the paper, I came up with an immediate mitigation mechanism that I called \u0026ldquo;read and discard\u0026rdquo;.\nPut very simply, whenever a chunk was fetched, it added a random overhead so that the fetch would request more than the actual chunk size, and the extra bytes would be discarded client side. Because the overhead is random, a chunk that is accessed multiple times will result in different sizes for an attacker observing traffic, not only obfuscating the actual size but also the fact that it is the same chunk that was requested.\nSunday, I realized that Tarsnap implemented the Padmé scheme which is doing something similar to mine but\u0026hellip; WAY better in terms of not wasting too much data on overhead. 
I looked into it, adapted my read and discard to use the same approach.\nBut then decided to push it a bit further.\nInstead of applying the overhead as a padding and reading extra bytes from a base offset, I used it to create a window containing the entire chunk and used a random left shift to change the base offset. As long as the shift does not exceed the overhead, the window is guaranteed to contain the full chunk. This will produce the same size as a Padmé scheme for an attacker observing traffic, but a server operator with access to the Range request will see different offsets at each request targeting the same chunks. Of course given enough traffic they could defeat this through a statistical analysis, but this makes it harder than just having the fixed base offset and it comes for free.\nHere\u0026rsquo;s an example of it applied to a file access:\n$ ./plakar cat 2f:/private/etc/passwd \u0026gt;/dev/null off=126916 (-7) length=1085 (+17) off=116762 (-1) length=91 (+4) off=235207 (-1) length=87 (+0) off=92866 (-0) length=86 (+0) off=126413 (-2) length=523 (+13) off=93674 (-45) length=2157 (+4) off=88353 (-5) length=271 (+5) off=69422 (-1) length=187 (+0) off=80981 (-24) length=4790 (+9) This was committed today and users of our new beta will benefit from it without even noticing !\nFinal words # As we are heading to our first stable release in a few weeks, we are working hard on squashing last bugs, polishing our tool, as well as implementing some features that we consider essential for our first release.\nFeel free to hop on our discord (we\u0026rsquo;re friendly), talk to us, test and report bugs. 
All help is appreciated…

Also, there’s a bunch of social media share buttons below, just saying!
","date":"19 March 2025","externalUrl":null,"permalink":"/posts/2025-03-19/plakar-1.0.1-beta.13-out/","section":"Plakar Blog","summary":"New UI filters, timeline navigation, S3 and FTP support, checkpointing, and performance boosts in Plakar’s latest beta release","title":"Plakar 1.0.1-beta.13 out!","type":"posts"},{"content":"We were thrilled to release our first beta in late February, but we weren’t just going to sit back and rest on our laurels.

Instead, we immediately set our sights on refining our product, incorporating valuable user feedback, and developing innovative features that would take our software to the next level.

New beta available #
We are excited to announce our new beta release and warmly invite you to test it!

$ go install github.com/PlakarKorp/plakar/cmd/plakar@v1.0.0-beta.4

During our pre-beta phase, we concentrated on building the core of Plakar—ensuring reliable data storage and reconstruction. Now that these critical components are in place, we can shift our focus to enhancing the user experience with intuitive features that simplify your work.

Since these read-only features are low-risk, we plan to roll out frequent updates, with new functionality appearing release after release. This is the perfect opportunity for you to test them and help us shape the tool that best meets your needs.

Previews #
Our web UI provided previews of text content with syntax highlighting and images:

Text Image

In addition to this, the latest beta added previews for PDF, audio and video content:

Video PDF

This works with either a local repository or a remote one, regardless of whether it’s encrypted or not. You can easily preview data from a backup hosted on a remote, encrypted repository using a Plakar UI launched locally.
The local UI fetches data in real time as it’s read and decrypts it on the fly, so there\u0026rsquo;s no need to download the entire backup or stream any bytes in cleartext.\nThis is the first in a series of new features designed to make the user interface more efficient at helping you find exactly what you need. Without giving too much away, additional exciting features are set to roll out in the coming days!\nTest improvements # Tests are crucial, and I firmly believe that developers should avoid writing tests for their own code. If I have flawed logic in mind and write both the code and its tests, we end up verifying that my bug is correctly implemented—which isn’t our goal 😄.\nPeer review is invaluable—especially given Plakar Korp’s requirement for two reviewers and our extensive experience with it. However, as a small team that frequently collaborates on the same code, we risk developing shared assumptions that might lead us all to the same flawed logic. I also prefer that our developers focus on their strengths, particularly R\u0026amp;D problem-solving, since that is an area where external help is harder to come by than testing.\nIn January, we had tests for the most critical parts of Plakar that affected storage format and could potentially lead to data corruption. However, we wanted broader coverage that also included the non-critical components. We decided to bring on an extra resource focused solely on testing, and @sayoun expressed interest, so he began working with us.\nSince then, he has diligently reviewed every piece of code we write, adding the missing tests for each package and command. This approach allows the team to concentrate on improving the software while @sayoun brings a fresh perspective to testing without being too involved in other discussions. 
We’re thrilled to see new tests emerging, catching bugs, and enhancing overall quality.\nMost recently, his efforts have focused on testing the CLI subcommands—the most user-visible part of Plakar—and he has already helped fix several command-line issues.\nDocumentation improvements # This effort was spearheaded by @omar-polo, who took on the heavy lifting of restructuring our documentation—converting all previous materials into a unified format and meticulously addressing every detail. The entire team also contributed by polishing the content, adding more examples, and fixing typos, with significant input from @semarie, a familiar face from the OpenBSD project that we were delighted to see around.\nAs a result, all our documentation in mandoc format now adheres to a consistent structure, enabling us to effortlessly automate the generation of Markdown versions. These versions are seamlessly displayed to users through plakar help and synchronized on our documentation site.\nOptimizations # We implemented several optimization improvements—nothing too fancy, but small wins are still wins!\nAmong these, two stand out:\n@mathieu-plak discovered and fixed an issue with our use of the binary package, which had been causing poor performance in packfile deserialization. Although the performance boost is significant, packfile deserialization is rarely used, so the improvement is only noticeable in specific scenarios. Nevertheless, this fix prompted us to review similar constructs to ensure we hadn\u0026rsquo;t made the same mistake elsewhere.\n@omar-polo enhanced certain lookups by employing an optimized scan of our B+ tree rather than a node traversal in some cases. This change significantly boosts performance when scanning all entries of a backup.\nAs a side note, we\u0026rsquo;ve begun working on deeper optimizations that are expected to deliver even bigger gains. 
Stay tuned for an upcoming article detailing these enhancements.

Bugfixes #
Based on user feedback—especially from @b1pb1p, @ncartron, @ajacoutot, and @semarie—we have resolved minor bugs that affected SFTP support and the agent when using specific command options.

Packaging #
In January, @ajacoutot emailed me to let me know he had packaged Plakar for the OpenBSD project. At the time, we were finalizing the storage format before freezing it, so I asked him to hold off to avoid users having to trash their data when the beta was released. Now that the beta is out and the storage format is stable, he updated his port and committed it to the OpenBSD project—OpenBSD users can simply run pkg_add plakar to get started!

On the same day, @lbartoletti informed me on IRC that he had packaged Plakar for the FreeBSD project, meaning FreeBSD users will soon be able to run pkg add plakar.
","date":"8 March 2025","externalUrl":null,"permalink":"/posts/2025-03-08/plakar-beta.4-and-upcoming-features/","section":"Plakar Blog","summary":"New UI previews, performance boosts, SFTP fixes, CLI testing, packaging on BSDs—beta.4 refines Plakar while previewing what’s next","title":"Plakar beta.4 and upcoming features","type":"posts"},{"content":"Before releasing a usable version, we wanted an expert to examine our cryptographic design and confirm we hadn’t made any regrettable choices. We were delighted to have Jean-Philippe Aumasson take care of the review—a true privilege given the high level of confidence we have in his skills: he is a recognized cryptographer who created various widely-used algorithms, including ones used in plakar, and who authored great books on the topic, two of which are on my desk as I write this.

Below is the unedited review of our original submission, followed by the unedited remediation review after our corrective steps.
Comments are inlined to provide clarifications where needed.

Initial review #
Summary #
Plakar is a data backup solution featuring client-side encryption and a server-side deduplication mechanism.

The goal of this audit is to review:

the soundness of the cryptography architecture
the reliability of the algorithms and protocols chosen
the security of the implementation
the correctness of the documentation

Resources provided by Plakar include:

the code in encryption/symmetric.go
documentation in CRYPTOGRAPHY.md and README.md

Our general assessment is that the current design is cryptographically sound in terms of component choice and parameters. However, we propose a number of improvements to reduce security risks, improve performance, and rely on more state-of-the-art components.

The 3 sections below describe:

our observations on the design
our observations on the code
our review of the changes after sharing 1. and 2.

Our observations don’t include any major security issue, but rather recommendations in terms of robustness and performance. The review of the changes validated the approach chosen, the choice of algorithms, and their parameters.

Design #
Password hashing: Scrypt vs. Argon2id #
We recommend switching to Argon2id for password hashing.

Currently password hashing is done with scrypt, with the following parameters:

KDFParams: KDFParams{
    N: 1 << 15,
    R: 8,
    P: 1,
    KeyLen: 32,
}

scrypt was developed in 2009 as one of the first memory-hard password-based hashing schemes with tunable memory. However, Argon2id was developed through the Password Hashing Competition to address some of its shortcomings, and is now recommended by modern security guidelines (such as OWASP and NIST).

Argon2id is defined in RFC 9106. Compared to scrypt, it has:

Better resistance to side-channel attacks
More intuitive parameterization
A simpler internal logic (instead of scrypt’s requirements for PBKDF2, SHA-2, ChaCha, etc.)
A Go implementation of Argon2id is available in the x/crypto package.\nWe recommend parameters t = 4 and m = 256MB, for a 256 megabyte usage. If t = 4 makes hashing too slow, then use t = 3.\nnote from the developers:\nThe KDF API was refactored so that it can use Argon2Id by default and alter its parameters or switch to a different KDF should it be required.\nChunk encryption: AES-GCM vs. AES-GCM-SIV # We recommend switching to AES-GCM-SIV for chunk encryption.\nAES-GCM-SIV is a mode defined in RFC 8452 that does not rely on randomness. It produces the nonce by computing a PRF over the message to encrypt. It implies that encrypting the same message twice will produce the same ciphertext. However, if each subkey encrypts a single chunk, this is not an issue.\nAES-GCM-SIV also prevents streaming of the data hashed, therefore the whole chunk has to be stored in memory. Since data is already chunked to be streamed, and chunks are of fixed, small size (64 KB), this is not an issue.\nThat said, AES-GCM-SIV has less adoption than AES-GCM, and is not as standardized as AES-GCM. AES-GCM is fine security-wise, switching to the SIV mode would just eliminate one risk related to randomness. Depending on the business requirements and client needs, AES-GCM may be preferable (for example, if a FIPS standard is needed).\nnote from the developers:\nThe encryption API was refactored so that it can use AES-GCM-SIV by default and switch to AES-GCM should it be required.\nThe most reliable implementation of AES-GCM-SIV is in Google\u0026rsquo;s Tink package.\nNote that the Go language maintainers are planning to add AES-GCM-SIV to x/crypto.\nSubkey encryption: AES-GCM vs. AES-KW # We recommend switching to AES-KW for subkey encryption.\nCurrently subkeys are encrypted with AES-GCM. 
However, there is a dedicated construction for the specific problem of encrypting symmetric keys (as short, fixed-size, high-entropy values), namely key wrapping.

Switching to AES-KW would eliminate the risk of repeated nonces when encrypting a large number of subkeys with the same key. Nonces being 12 bytes, or 96 bits, a collision of nonces is expected after approximately 2^48 = 281,474,976,710,656 subkeys. That’s a lot of subkeys (8 petabytes worth of 32-byte keys), but at scale and over a key’s lifetime the risk may become non-negligible.

AES-KW is defined in RFC 3394, and is standardized in NIST SP 800-38F.

To integrate AES-KW, we recommend the package go-aes-key-wrap.

Checksums potential information leak #
The specification states that “Each time a chunk is produced, a checksum of the data is computed for internal purposes and recording within the snapshot itself.”

The checksum is a SHA-256 hash of the cleartext data. A MAC of the checksum is then used as blob ID, although the checksum seems to be used as an index.

Our main observation is that knowledge of a checksum (as a hash of cleartext data) can allow an attacker to identify whether a given piece of cleartext data is stored. Depending on the threat model, this may or may not be an issue.

note from the developers:

We forgot to explain that digests were not visible within a backup repository, as they were only part of encrypted indexes. They were supposedly only available locally to the software after it had fetched and decrypted the repository state. Regardless, we figured out a way to rework our lookups and adapted the codebase to work fully on MAC and no longer make use of digests, removing all potential concerns over digests.

Furthermore, we suggest adjusting terminology to avoid misunderstandings and use the most accurate term:

“Checksums” are generally defined as non-secure hash values designed to detect accidental errors (such as CRCs).
In contrast, “hash values”, “digests”, or “fingerprints” are generally created using cryptographic hash functions, secure against adversarial changes.

The function ChecksumHMAC() is used to produce an objects.Checksum value. Here, we suggest replacing ChecksumHMAC with (for example) MAC() or ComputeMAC, as 1) HMAC is just a type of MAC (or PRF), like AES is a type of block cipher, and 2) the value computed is not a checksum but a MAC (message authentication code).

Code #
The proposed implementation in symmetric.go and hashing.go uses reliable, Go-native implementations of cryptographic components. It uses them in a safe way, for example using strong randomness, properly initializing a nonce/IV, and so on.

We just have a minor observation:

Potential deadlock #
If the reader passed to DecryptStream() does not provide full chunks of data, the read operations in the goroutine could stall indefinitely. Unless the risk is really negligible, we recommend implementing a timeout to prevent denials of service.

note from the developers:

This comment prompted a review; our assessment is that our implementation will raise an error and cause DecryptStream() to fail on incomplete chunks of data.

Remediation review #
After discussion with the Plakar maintainers, we reviewed the changes performed in the documentation and code to address our recommendations, namely the following schemes as new defaults:

Use of BLAKE3 for hashing and MAC
Use of AES-GCM-SIV for chunk encryption
Use of AES-KW for subkey encryption

Updated doc: CRYPTOGRAPHY.md

In CRYPTOGRAPHY.md#current-defaults, nit:

KEYED BLAKE3 -> Keyed BLAKE3
ARGON2ID -> Argon2id
(Also in the code, s/Argon2ID/Argon2id)

Updated symmetric.go: /encryption/symmetric.go

No problem found.

Switch to ARGON2ID:

PR #447: replaces the default KDF and allows plugging SCRYPT or PBKDF2 if required (not exposed)

The Argon2id
parameters seem to be 256KB only, is that intended? I’d recommend 256MB or more.

Argon2IDParams: &Argon2IDParams{
    SaltSize: saltSize,
    Time: 4,
    Memory: 256 * 1024,
    Threads: uint8(runtime.NumCPU()),
    KeyLen: 32,
},

note from the developers:

Confusingly, the size is not expressed in bytes but in KiB, as confirmed by the documentation at golang.org/x/crypto/argon2: “The time parameter specifies the number of passes over the memory and the memory parameter specifies the size of the memory in KiB. For example memory=64*1024 sets the memory cost to ~64 MB.”

The Threads parameter was also lowered to 1 in a subsequent commit after approval by the auditor.

Switch to BLAKE3:

PR #448: Allows using BLAKE3 as a hasher for our HMAC function; we switched to BLAKE3 by default instead of SHA256 in a separate commit. PR #457 described below effectively unplugs all digests to only compute HMAC.

OK (where the B3 HMAC is replaced by keyed B3 in another PR)

Switch to AES256-KW:

PR #455: split data encryption and subkey encryption, allow using AES256-KW

On the verification canary: AES-KW includes an integrity check, to ensure that the unwrapped key (a subkey decrypted using the passphrase-derived key) is correct. However keeping the passphrase verification canary is fine, and needed when AES-KW is not the subkey encryption scheme used.

Switch to AES256-GCM-SIV:

PR #465: switch to AES256-GCM-SIV #465

Looks good, no problem found, OK with the tink package usage.

Switch from digests to MAC

PR #457: kill checksums use hmac only. No more calls to Checksum(); the function was removed and we now only rely on ComputeMAC().
The command plakar digest allows computing a digest instead of a MAC if needed; it no longer resorts to digests recorded in the snapshot.

PR #469: the type objects.Checksum was renamed to objects.MAC, with a mechanical change renaming all types and variables for consistency.

Looks good, no problem found.

Switch from HMAC-BLAKE3 to Keyed BLAKE3

PR #484: Switch from HMAC-BLAKE3 to Keyed BLAKE3

Looks good, no problem found.
","date":"28 February 2025","externalUrl":null,"permalink":"/posts/2025-02-28/audit-of-plakar-cryptography/","section":"Plakar Blog","summary":"Independent cryptographic audit of Plakar by Jean-Philippe Aumasson confirming a sound overall design, with recommendations to modernize components. Key improvements include switching to Argon2id for password hashing, AES-GCM-SIV for chunk encryption, AES-KW for key wrapping, and replacing SHA-256 digests with keyed BLAKE3 MACs. No major security issues were found, and follow-up review validated the updated architecture, algorithms, and implementation.","title":"Audit of Plakar cryptography","type":"posts"},{"content":"Listen to this article as an AI-generated podcast as you read!

Hello!

The past few months have been incredibly intense as we launched Plakar Korp to support the development of plakar and other related software.

In just one quarter, I transitioned from working solo to collaborating with a highly talented team, all of whom I had worked with before in various contexts and knew would be a perfect fit.
Not only did they catch up remarkably fast on the existing codebase, but each of them also introduced very significant improvements, which is simply astounding to me!

Now that I can delegate even the most intricate code to trusted people, I figured it was the perfect time to step back from my code editor and write my very first blog post:

I’m thrilled to announce that our first beta release is now available for general testing, showcasing our current state of work.

What is Plakar and why are we doing it? #
plakar is free and open-source software for creating distributed and versioned backups of various data sources. It features data deduplication, compression, and encryption. Written in Golang, Plakar is developed primarily on macOS, Linux, and OpenBSD. It is designed to work on most modern Unix-like systems, with a Windows port currently in progress and set to launch soon.

Born out of dissatisfaction with both open-source and commercial alternatives, Plakar was built with one clear goal in mind: to provide the most advanced backup features in the easiest form possible. With no need for hacks or custom scripts, there’s no reason to procrastinate and risk data loss.

It can be installed and configured in seconds: for most users, creating a backup is as simple as typing plakar backup, and restoring is as straightforward as typing plakar restore. There’s simply no reason to defer setting up backups to the next day!

What to expect from this beta #
Beta software can be worrisome, so why would you want to try beta backup software?

First of all, the storage format stems from years of evolutionary development. It underwent months of stabilization and stress testing, and it provides various sanity checks to ensure data integrity. While you may encounter glitches in the CLI or the web UI, which are still fairly recent, the data storage itself is nowhere near beta.
Your backed-up data is stored safely, and data corruption will not go undetected.

Secondly, you can test our beta while retaining your existing solution, and then decide if it’s worth switching when our stable release lands. You’ll be able to evaluate usability and storage efficiency, and see how it improves your workflows and resource usage.

Finally, by testing the beta, you’ll be able to identify commands that need improvement or are missing for your use cases. This feedback helps ensure that most common use cases are fully supported when the release lands!

The most likely scenario is that you’ll encounter strange logs, typos in error messages, or command options that do not work as you’d intuitively expect in some cases. We will fix these issues as quickly as possible.

State-of-the-art deduplication #
To minimize data loss during an incident, it’s crucial to perform backups frequently so that the gap between the last backup and the incident is as short as possible.

However, backups involve reading data, processing it, and transmitting/writing it to storage, so the frequency of backups is limited by available resources. For example, if a backup operation takes over an hour, you can’t realistically schedule hourly backups. Similarly, if you have 1TB of storage for backing up 100GB of data, the number of backups you can store depends on how efficiently each backup uses space—essentially, on how well it avoids storing redundant data.

Inefficient deduplication can lead to data being read, transmitted, and stored more times than necessary. This not only slows down the backup process but also consumes additional bandwidth and storage space, driving up overall costs. The problem is compounded in a 3-2-1 strategy, where multiple copies across different sites can significantly amplify these inefficiencies.

Historically, backup systems relied on making full copies of the original data.
Over time, incremental backups were introduced to store only newly created or modified files. Later, the approach evolved into fixed-size chunking, which enabled the transmission of just the modified parts of a file—provided its structure remained unchanged. Today, the most advanced method is content-defined chunking, which intelligently divides files into chunks and adapts to shifts caused by data insertions or deletions, ensuring that only the smallest possible delta is transmitted.

Plakar builds upon our go-cdc-chunkers package, which implements state-of-the-art content-defined chunking algorithms. In our benchmarks with similar random data, our implementations not only outperformed all others by a fair margin but also provided an excellent distribution of chunk sizes:

Benchmark_Restic_Rabin_Next-8             613.78 MB/s    1301 chunks
Benchmark_Askeladdk_FastCDC_Copy-8       2091.00 MB/s  105327 chunks
Benchmark_Jotfs_FastCDC_Next-8           2473.86 MB/s    1725 chunks
Benchmark_Tigerwill90_FastCDC_Split-8    3112.39 MB/s    2013 chunks
Benchmark_Mhofmann_FastCDC_Next-8        2078.19 MB/s    1718 chunks
Benchmark_PlakarKorp_FastCDC_Copy-8      7733.47 MB/s    3647 chunks
Benchmark_PlakarKorp_FastCDC_Split-8     8142.45 MB/s    3647 chunks
Benchmark_PlakarKorp_FastCDC_Next-8      8149.54 MB/s    3647 chunks
Benchmark_PlakarKorp_JC_Copy-8          13431.34 MB/s    4033 chunks
Benchmark_PlakarKorp_JC_Split-8         13734.42 MB/s    4033 chunks
Benchmark_PlakarKorp_JC_Next-8          13739.79 MB/s    4033 chunks

GitHub user @glycerine performed independent benchmarks, testing other algorithms such as Google’s FastCDC implementation for Stadia, which we also outperformed.

We will continue to track advances in the field to implement state-of-the-art algorithms and provide our users with the best deduplication they can expect.

State-of-the-art encryption #
Backups often reside in cloud services or offsite storage, which might be targeted by hackers or even vulnerable to insider threats.
Attackers might attempt to steal data and silently modify or delete backups without detection.

To limit these risks, the only solution is to rely on end-to-end encryption (E2EE) and message authentication codes (MACs) to provide privacy, authenticity, and integrity guarantees.

E2EE encrypts data locally before it leaves your device, ensuring that only you can decrypt and read it — even your storage provider cannot access the information. This protects your data from unauthorized access throughout its lifecycle, from creation to retrieval, even if the storage system is compromised or physical disks are stolen.

MACs enable detection of any unauthorized modifications to the data, ensuring that any attempt to alter or tamper with your backups is promptly identified.

Security is a process, not a product.

– Bruce Schneier

Cryptography was not built as an add-on feature layered on top of plakar but is an integral part of its design.

It relies heavily on MACs to authenticate any information stored as part of a backup, and performs encryption during backup and decryption during restore without ever sharing the secret with the storage layer. It effectively performs end-to-end encryption, allowing the hosting of repositories at public cloud providers: a backup repository does not leak information regarding its content, and any tampering is detected and reported during backup checks or restores.

In the sample below, I blindly modified a random byte in the backup repository, which tampered with a random file, and performed a check on the backup:

$ plakar check ec6f019c
repository passphrase:
ec6f019c: ✓ /private/etc/afpovertcp.cfg
ec6f019c: ✓ /private/etc/apache2/extra/httpd-autoindex.conf
ec6f019c: ✓ /private/etc/apache2/extra/httpd-dav.conf
ec6f019c: ✓ /private/etc/apache2/extra/httpd-default.conf
ec6f019c: ✓ /private/etc/apache2/extra/httpd-info.conf
[...]
ec6f019c: ✘ 43650d9f7...: corrupted object
ec6f019c: ✘ /private/etc/openldap/schema/java.schema: corrupted file
[...]
ec6f019c: ✓ /private/etc/zshrc
ec6f019c: ✓ /private/etc/zshrc_Apple_Terminal
ec6f019c: ✓ /private/etc
ec6f019c: ✓ /private
ec6f019c: ✓ /

Despite having a good internal understanding of what we’re doing, we decided to contract a cryptographer to perform an independent review and provide suggestions for improvements. The audit confirmed that our design is sound and provided suggestions, all of which were implemented, to strengthen our approach further and make plakar future-proof from a cryptographic standpoint.

We will commit to relying on independent reviews from cryptography experts and follow their guidance whenever working on cryptography-related topics, including reassessing previous decisions at regular intervals to ensure we remain ahead of evolving attacks.

A few words on performance #
When evaluating the performance of backup software, we need to consider multiple dimensions that together provide a proper balance between scalability, resource utilization, and speed.

Our challenge is to identify the optimal tradeoffs so that you achieve maximum scalability and speed while using minimal resources.

Scalability #
As data grows, both in terms of size and number of objects, challenges arise: how does a solution cope with millions of objects, or even, how does it handle millions of objects in a single directory?

When I first discussed this with @misterflop, he installed a widely used commercial product to test these worst-case scenarios, and it crashed on him at the first try.

The same test on plakar succeeded, but back then, as was common with other open-source solutions, it used in-memory structures that made it super fast—but also caused it to consume huge amounts of memory in such cases, exposing it to OOM kills.
After weeks of byte-level optimizing it became clear that none of the few bytes saved here and there would enable scaling significantly.\nWe decided to implement two major changes: first, to start relying on disk offloading during backups to avoid hogging all memory; and second, to structure snapshots as a B+ tree, which allows us to spread nodes across our storage and load them on demand rather than forcing entire indexes to fit in memory.\nThe two changes required considerable work but eventually paid off and while offloading to disk cost us roughly a 30% penalty in backup speed, plakar can now scale to very large backups without requiring insane amounts of memory. We can now put some focus on optimizing speed knowing that scalability is a solved issue!\nDeduplication efficiency # While it is difficult to produce universal metrics—since efficiency depends on the type of data being backed up and its variability over time—we can affirm that plakar is highly efficient in deduplication.\nBy combining efficient compression with state-of-the-art content-defined chunking deduplication, the first backup is generally slightly smaller than the original data, and subsequent backups are considerably smaller as they essentially consist of the delta.\nIn the example below, I backed up our korpus folder ten times producing snapshots of 33GB each. 
The repository holds 327GB of cumulative data; however, the actual repository size is only 28GB, which is smaller than even a single snapshot by over 15%.

$ plakar info |grep ^Size
repository passphrase:
Size: 327 GB (326968934310 bytes)
$ du -sh ~/.plakar
 28G    /Users/gilles/.plakar

A 100GB directory can be backed up dozens of times in a day while the repository grows by only a few MB if changes are limited.

Thanks to this robust deduplication, use cases where frequent backups were previously unrealistic due to wasted space—or where offsite backups were prohibitively expensive because of the storage required for multiple copies—are now viable again.

Speed #
While our initial focus was on ensuring every backup is robust and error-free, early benchmarks indicate that our solution performs fairly well even with these priorities… but we can do better, because fairly well is not enough:

Our goal is to offer the best of both worlds—robust data integrity and exceptional speed.

The road ahead #
We plan to streamline backup and restore processes by refining our algorithms, leveraging more parallel processing, and reducing any unnecessary overhead and locking. We also plan to improve caching mechanisms and fine-tune resource allocation, aiming at a performance boost while keeping resource consumption minimal.

These are all fancy words to say: we plan for faster backups and recoveries.

The foundations we’ve laid for correctness and scalability create a strong baseline upon which we can continuously optimize performance.
As we incorporate these enhancements and you update to newer versions, you can expect even shorter backup and recovery windows.

These are not just claims: in prototypes with basic parallelization optimizations, integrity checks of backups get up to a 10x boost and restores up to a 4x boost… and these are with naive optimizations.

A few words on reliability #
It is impossible to guarantee that backups are never corrupted, because storage failures and bugs are bound to happen. However, we should make sure that backups do not produce corrupted data, and that data corruption happening in the storage does not go undetected.

Deferred garbage collection #
plakar operations are non-destructive by design: clients push new states that are aggregated with previous states; even the deletion of a snapshot is technically the addition of a deletion event.

A maintenance job can be scheduled at frequent intervals to reclaim storage space by removing resources that are no longer referenced by a snapshot. Lots of effort has been poured into making this process lock-less and allowing maintenance to happen in parallel with backups in progress. However, because this is the only destructive operation in a backup repository, we decided to take two safety-net measures during the beta phase:

maintenance locking: despite the maintenance being lock-less, we are temporarily resorting to a maintenance lock preventing maintenance from happening in parallel with backups in progress. The goal is to prevent users from having to deal with corner cases that we’re unaware of and that we’ll try to provoke through stress-testing. When we’re 100% confident, the locking can transparently go away in a new version.

deferred garbage collection: orphaned resources are marked as deleted but are only deleted after a grace period.
During that grace period, plakar will fail lookups as if the resources had been removed for real and display an error message\u0026hellip; while keeping them at hand should a bug have crept in. If the error message is never encountered in the next few weeks or months, this deferred garbage collection can transparently go away in a new version.\nTest Coverage # Because recent plakar development happened at a fast pace, we prioritized writing tests for the most critical components that could lead to data corruption. All of the lower layers, from storage to encryption, have unit tests integrated into our CI, which prevents code merges if a test fails.\nWork is in progress to continuously extend testing to the upper layers, such as the CLI and subcommands, even though these components carry minimal risk of causing data corruption.\nThe plakar Korpus # We compiled a corpus of millions of objects, including text, code, binary objects, and images coming from popular code repositories. plakar is tested by performing backups of this corpus, then running integrity checks and restores, none of which are supposed to ever fail regardless of how many times they are run.\nWhile this corpus is representative of the wide variety of data people tend to back up, it is a worst-case scenario, since it contains a LOT of heterogeneous data in a single backup, making it very likely to be worse than your typical use cases.\nIntegrity Validation # When creating backups, plakar computes a cryptographic MAC for every chunk of data as well as for entire objects. These MACs are recorded in the snapshot and used as lookup keys in the backup repository.\nThis mechanism allows plakar to easily validate that the stored data has not been corrupted by fetching the data and recomputing the MAC to compare it with the recorded value.
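The per-chunk MAC scheme can be sketched as follows (illustrative only: HMAC-SHA256 and the hard-coded key below are stand-ins, not plakar's actual construction or key handling):

```python
import hashlib
import hmac

KEY = b"repository-key"  # hypothetical; a real repository derives keys properly

def mac(chunk: bytes) -> str:
    """Compute the MAC recorded for a chunk at backup time."""
    return hmac.new(KEY, chunk, hashlib.sha256).hexdigest()

# At backup time: record one MAC per chunk in the snapshot.
chunks = [b"chunk-one", b"chunk-two"]
recorded = [mac(c) for c in chunks]

# At check/restore time: recompute and compare, detecting silent corruption.
def verify(stored_chunks, recorded_macs) -> bool:
    return all(hmac.compare_digest(mac(c), m)
               for c, m in zip(stored_chunks, recorded_macs))

print(verify(chunks, recorded))                        # True
print(verify([b"chunk-one", b"chunk-tw0"], recorded))  # False: corruption detected
```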
This process is used during a restore, as each file has its chunks recomputed to ensure they match the records when writing back the files to a target.\nAdditionally, plakar provides a check mechanism to perform these operations without executing a full restore, allowing, for example, a laptop with 256GB of disk space to verify the integrity of a 1TB backup.\nStructures Versioning # plakar incorporates versioning for all its internal structures that interface with storage.\nThis approach ensures that sanity checks can be performed, prevents older versions from manipulating data created by newer versions, and allows new versions to reload data created by older ones.\nMore generally, it enables deduplication across different versions in use without risking corruption from misinterpreting the stored structure format.\nNow about features in this beta! # The beta comes after years of late-night development and a quarter of intense full-time teamwork.\nIt is difficult to list exhaustively all the features it brings, so let’s focus on the most notable ones and let you discover everything it has to offer by testing it!\nExtensive Built-in Documentation # Following the Unix tradition of providing manual pages for all commands, plakar offers manual pages for each of its subcommands, detailing their options and usage examples.\nThese manuals are available online and can also be accessed offline directly from the tool, ensuring you have the necessary information during an incident when internet access may be unavailable. The online documentation is synchronized with the tool\u0026rsquo;s documentation to guarantee that they are always identical and that any fixes are updated everywhere.\nThere are few things as frustrating as inaccurate or missing documentation in the middle of a stressful incident—especially when it involves potential data loss. 
We consider such documentation issues critical bugs that must be fixed with the same urgency as any software defect.\nA Unix-friendly CLI # Plakar has no learning curve: it mimics existing Unix-like commands to feel natural.\nYou’ll be able to run commands like plakar backup, plakar restore, plakar cat, plakar ls, plakar diff, plakar locate, or even plakar rm, so that in the event of an incident requiring fast action you don’t need to re-discover the command line of an unfamiliar tool.\nOnce the backup repository is set up, manipulating backups becomes as natural as an everyday task.\nA user-friendly web UI # It comes with a user-friendly web UI that lets you browse, preview, and download content.\nLight mode Dark mode Over time, the web UI will progressively support all the features available in the CLI, giving users the flexibility to work in either the console or a browser.\nMulti-backend storage layer # Plakar supports storing backups using a variety of backends.\nIt can store backups on a local or mounted filesystem, on a remote filesystem via SFTP, in an S3 bucket powered by MinIO, Vultr, Scaleway, or AWS, and even offers experimental support for databases with SQLite.\nWe will continue implementing new backends to expand the variety of storage solutions available to plakar users.\nMulti-backend importer and exporter layer # Unlike many other solutions, plakar does not focus on a single type of data source.\nIt provides an API to implement data importers and exporters, enabling it to back up data from remote sources—such as an S3 bucket—and restore it to a remote target, like a different S3 bucket, or to a local directory; importers and exporters are not tied to one another and can be mixed freely.\nJust as with storage backends, new ones will be implemented, allowing plakar to back up more than just local filesystems while retaining the same intuitive feel and benefiting from the same level of deduplication and encryption.\nCross-site synchronization # Backup
repositories can be synchronized with each other in either direction.\nThe synchronization mechanism is designed to be both flexible and secure, allowing administrators to configure bidirectional replication that maintains consistent data across multiple sites. Whether you need to mirror backups for disaster recovery or adhere to regulatory constraints that dictate specific data flow directions, plakar adapts to your requirements.\nThe system optimizes data transfers by propagating only incremental changes, ensuring efficient use of bandwidth while keeping repositories in sync.\nAgent mode # It also comes with an experimental agent mode, which allows basic orchestration and scheduling of tasks in a simple infrastructure.\nThe agent mode can be used to configure specific tasks and ensure they run at given intervals, removing the need for scripting tools to control plakar.\nWant to give it a try? # You can install and test plakar right away following these two simple steps:\nRead the simple quickstart guide that will hold your hand and help you get started that\u0026rsquo;s all actually, no need for more :-) Want to help us? # The best way to help us is to test plakar, report any issues you encounter so that we can improve and polish the software before the stable release, and contribute to both the documentation and code if that\u0026rsquo;s within your skillset. By testing plakar, you play a crucial role in enhancing its stability and usability, as each bug report, suggestion, or enhancement helps us refine the product and better meet the needs of our community.\nThe next best way to support us is to spread the word and share this post with your friends.
Word of mouth is essential for us at this point to gain traction and popularity, as every recommendation helps build a community of engaged users invested in the project\u0026rsquo;s success.\nFinally, feel free to join our Discord server, where development takes place almost transparently every weekday (and sometimes in the evenings for night owls). There, you can chat with our community, ask both general and technical questions, and observe discussions among developers in our virtual hackrooms. You might even catch parts of our technical meetings in public voice channels, providing you with unique insights into our development process.\nTogether, these actions—testing, sharing, and engaging—are the pillars that help plakar evolve into a robust and user-friendly tool for everyone.\nWhat\u0026rsquo;s coming next? # Bug fixing # We intend to squash all \u0026ldquo;blocker\u0026rdquo; bugs reported to us in preparation for an upcoming Release Candidate version. This Release Candidate will pave the way for our first stable release.\nOptimizations # First of all, we have several parallelization optimizations that we did not include initially because we focused on correctness over raw performance. Our next phase is to start parallelizing commands that currently run sequentially.\nIn addition, we have identified several areas that require in-depth optimization, such as refining the unlocking process of our B+ tree and better caching.\nAlerting, monitoring and dashboards # We want to add support for a few features for registered users, such as the availability of analytics dashboards, monitoring of backups, and alerting should backups, checks, or synchronization of repositories not happen at the expected intervals or fail for any reason.
We are still assessing the best way to provide these features while retaining the expected privacy.\nAmazon S3 Glacier # We also want to add support for Amazon S3 Glacier to provide at least one service with Write-Once-Read-Many (WORM) capabilities.\nThis will allow users to push their backups into tamper-proof storage, ensuring that once data is written, it cannot be modified.\nMore importers! # We want more importers to ingest data from new data sources, and we already have ideas on how to move forward with this to provide the most popular ones in a relatively short timeframe\u0026hellip; but at this point, no spoilers!\nEnterprise version # When the RC is released, our team will split so that we always have people focusing on the community version, and people working on the enterprise features that will complement it.\nThe enterprise version will provide all the features that don’t make sense to most users with small setups, but that companies rely upon for accountability, regulatory requirements, or simply convenience when dealing with a large number of servers.\n","date":"26 February 2025","externalUrl":null,"permalink":"/posts/2025-02-26/plakar-beta-release/","section":"Plakar Blog","summary":"First public beta of Plakar is out: scalable, encrypted, efficient, and open-source backups ready for real-world testing and feedback","title":"Plakar beta release!","type":"posts"},{"content":"In today’s digital landscape, where downtime can cost businesses thousands of dollars per minute, having a robust disaster recovery (DR) strategy is non-negotiable. Two fundamental metrics in any business continuity plan are recovery time objective (RTO) and recovery point objective (RPO).
These determine how quickly a system can recover after a failure and how much data a company is willing to lose in the process.\nA low RTO means your business aims for fast recovery, ensuring minimal service disruption, while a low RPO means you prioritize frequent backups to prevent significant data loss. Understanding these concepts is crucial for designing an effective backup strategy that aligns with your risk tolerance, budget, and operational needs.\nNote: Both RTO (and TSO, as it is sometimes referred to) must be defined by the business considering business impact, while also taking into account technical constraints, such as the ability to perform backups at any moment in production, as well as the capacity and cost associated with backup storage.\nWhat is Recovery Time Objective (RTO)? # Definition and Explanation # Recovery Time Objective (RTO) is the maximum acceptable amount of time that a system, application, or business process can be down after a failure before causing significant business disruption. For example, if a company sets an RTO of four hours, it means their systems must be back online within four hours of a disruption to minimize operational impact.\nFactors Affecting RTO # Business Impact Analysis (BIA): Identifying mission-critical applications and their required uptime. System Redundancy: High-availability infrastructure can minimize recovery time. Backup and Recovery Methods: Automated failover, manual recovery procedures, or real-time data replication. Disaster Recovery Testing: Frequent testing ensures realistic RTO expectations. Air Gap Quality: In extreme cases, the RTO depends on the quality of your air gap. For example, if the last magnetic tapes used for backup remain in the same data center during a fire, the RTO is significantly degraded. What is Recovery Point Objective (RPO)? # Definition and Explanation # Recovery Point Objective (RPO) defines the maximum acceptable data loss measured in time.
It determines how frequently backups should be taken. For instance, an RPO of 15 minutes means that data backups occur every 15 minutes, ensuring minimal loss in the event of a failure.\nFactors Affecting RPO # Data Change Frequency: High-volume databases or rapidly changing data require a lower RPO. Backup Strategy: The method chosen, such as scheduled backups or continuous replication, has a direct impact on the RPO. Storage and Cost Constraints: Achieving a low RPO generally requires more frequent backups, increasing storage costs. Technical Capabilities: The production environment’s ability to perform backups at any given moment is a critical factor. Continuous Data Protection (CDP) # While Continuous Data Protection (CDP) is often touted for achieving near-zero RPO through real-time data replication, it is important to note that CDP does not replace traditional backups. In incidents where the replicated data itself becomes compromised, traditional backups are essential. However, CDP can contribute to an improved RTO by allowing quicker recovery of the most recent data state.\nKey Differences Between RTO and RPO #\n| Feature | Recovery Time Objective (RTO) | Recovery Point Objective (RPO) |\n| --- | --- | --- |\n| Definition | Maximum allowable downtime | Maximum acceptable data loss |\n| Measured in | Time (minutes, hours) | Time (minutes, hours) |\n| Business Impact | Determines system recovery speed | Defines backup frequency |\n| Example | A four-hour RTO means systems must be restored within four hours | A 15-minute RPO means backups occur every 15 minutes |\n| Cost Factor | Lower RTO generally requires higher infrastructure costs | Lower RPO generally demands more frequent backups and greater storage costs |\nImpact on Business Continuity Planning # Both RTO and RPO must be carefully aligned with a company’s risk assessment and financial constraints.
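The arithmetic behind an RPO target can be made concrete with a small sketch (hypothetical figures and function names of our own):

```python
def worst_case_data_loss_min(interval_min: float, duration_min: float = 0.0) -> float:
    """Worst case: a failure strikes just before the next backup completes."""
    return interval_min + duration_min

def meets_rpo(interval_min: float, rpo_min: float, duration_min: float = 0.0) -> bool:
    """Does a given backup schedule satisfy an RPO expressed in minutes?"""
    return worst_case_data_loss_min(interval_min, duration_min) <= rpo_min

# A 15-minute RPO is met by backing up every 15 minutes...
print(meets_rpo(15, 15))      # True
# ...but not by hourly backups, which can lose up to an hour of data.
print(meets_rpo(60, 15))      # False
# A 10-minute interval whose backups take 10 minutes still risks 20 minutes of loss.
print(meets_rpo(10, 15, 10))  # False
```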
Achieving shorter RTO and RPO targets requires:\nFaster Recovery Solutions: Such as hot standby systems or real-time replication, keeping in mind that these solutions must be supported by the production environment’s backup capacity. Frequent Data Backups: To reduce the window of potential data loss, balanced with the cost and storage implications. Attention to Air Gap Integrity: Ensuring that backups are stored securely, preferably isolated from the primary data center, to protect against disasters like fires or other site-specific incidents. Setting the Right RTO and RPO for Your Business # Analyzing Business Requirements # Identify Mission-Critical Applications: Determine which applications, databases, and customer portals are essential. Perform a Risk Assessment: Define acceptable levels of downtime and data loss, taking into account both business impact and technical capabilities. Consider Technical Constraints: The ability to perform backups at any moment and the costs involved in maintaining backup storage are key factors. Creating a Disaster Recovery Plan # Align Backup Frequency with RPO: Ensure that the backup schedule meets the desired RPO. Implement Automated Failover Strategies: To support a lower RTO, while being aware of the limitations of replication solutions. Ensure Robust Air Gap Practices: Avoid storing critical backup media, such as magnetic tapes, in the same data center where they might be exposed to the same risk, for example during a fire. Testing and Validating Your RTO and RPO Goals # Conduct Regular Disaster Recovery Drills: To verify that the systems can meet the defined RTO and RPO. Measure Recovery Time Performance: Compare actual recovery times against your predefined objectives. Adjust Backup Strategies as Needed: Based on test results and evolving technical capabilities or business requirements. Key Takeaways # RTO defines how quickly you recover; RPO defines how much data you lose. 
Shorter RTO and RPO targets require more advanced backup solutions and robust technical capabilities. The business must define RTO (and TSO) in conjunction with technical constraints, including the ability to perform immediate backups and the associated storage costs. Continuous Data Protection (CDP) supports improved RTO but does not replace traditional backups. The quality of your air gap is crucial; for example, if backup media remain in a compromised data center, your RTO can be severely impacted. Regular testing and validation are essential to ensure that recovery goals are achievable. Conclusion # Understanding RTO and RPO is essential for designing an effective business continuity plan. By carefully defining these objectives, businesses can minimize downtime, reduce data loss, and maintain customer trust. It is vital that organizations evaluate not only their risk tolerance and budget constraints but also the technical capabilities of their production environments, such as the ability to perform continuous backups and the costs of backup storage.\nInvesting in the right backup strategies and ensuring robust practices like maintaining an effective air gap are key to achieving your recovery objectives. 
Remember, while Continuous Data Protection (CDP) can enhance recovery times, it must be integrated with traditional backup solutions to ensure comprehensive protection against data loss and system downtime.\nA proactive disaster recovery plan is the cornerstone of long-term business stability and operational success.\n","date":"12 February 2025","externalUrl":null,"permalink":"/posts/2025-02-12/understanding-rto-and-rpo-in-disaster-recovery/","section":"Plakar Blog","summary":"Learn how RTO and RPO define recovery speed and data loss tolerance—essential metrics for building resilient disaster recovery strategies","title":"Understanding RTO and RPO in disaster recovery","type":"posts"},{"content":"Data loss can happen in many ways: whether due to accidental deletion, cyberattacks, hardware failure, or even a catastrophic event like a data center fire. To protect against these risks, IT professionals have long relied on the 3-2-1 backup rule, a fundamental strategy for ensuring data resilience.\nThis article breaks down what the 3-2-1 backup rule is, why it is critical, and why replication or single-cloud backups are not enough. We also explore the types of threats it mitigates, from hacker intrusions to storage provider failures, and how to implement it effectively with proper offline or air-gapped backups.\nWhat is the 3-2-1 backup rule? # The 3-2-1 backup rule is a best-practice guideline for data redundancy and disaster recovery. It ensures that organizations maintain sufficient copies of their data to minimize the risk of total data loss.\nThe core principle of 3-2-1 # The rule dictates that you should:\nKeep at least 3 copies of your data (one primary plus two backups). Store backups on at least 2 different types of media (for example, a local disk and cloud storage, or a local NAS and tape). Ensure 1 backup copy is off-site (in a different location or cloud service, ideally offline or air-gapped). 
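The three counts in the rule above lend themselves to a mechanical check; a minimal sketch, assuming a hypothetical inventory format of our own:

```python
def satisfies_3_2_1(copies) -> bool:
    """copies: list of dicts like {"media": "disk", "offsite": False}."""
    total = len(copies)                           # at least 3 copies overall
    media_types = {c["media"] for c in copies}    # at least 2 media types
    offsite = sum(1 for c in copies if c["offsite"])  # at least 1 off-site copy
    return total >= 3 and len(media_types) >= 2 and offsite >= 1

inventory = [
    {"media": "disk", "offsite": False},  # primary copy on the production server
    {"media": "nas",  "offsite": False},  # secondary copy on a separate NAS
    {"media": "tape", "offsite": True},   # tertiary, air-gapped off-site copy
]
print(satisfies_3_2_1(inventory))      # True
print(satisfies_3_2_1(inventory[:2]))  # False: only two copies, none off-site
```

Real inventories need more nuance (immutability, retention, restore testing), but these three counts are the baseline.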
This approach guarantees that even if one or two copies are lost, a third copy remains accessible for recovery.\nExample of a proper 3-2-1 backup implementation # Let us say you run a critical business application storing important customer data:\nPrimary Copy – The data lives on your production server (for example, an on-premise storage system or S3 storage). Secondary Copy – A backup is stored on a separate NAS, another cloud storage, or a different disk-based system. Tertiary Copy (Off-Site and Air-Gapped) – A cloud backup stored in AWS Glacier with a multi-day deletion delay, or a tape backup stored in a secure facility with robotic retrieval. This ensures that even if your main server fails, your local backup is corrupted, or your cloud provider is compromised, an air-gapped copy remains protected.\nWhy is it important to implement the 3-2-1 backup rule? # A single backup is never enough because data loss comes from many unpredictable sources, including:\nHuman errors – Accidental file deletions or unintended data overwrites. Hardware failures – Disk crashes, server failures, or RAID corruption. Cyber threats – Ransomware, malware, or hacker intrusions. Administrative mistakes – Accidental database deletions or misconfigurations. Cloud service failures – Unexpected outages or accidental deletions by providers. Physical disasters – Fires, floods, earthquakes, or power failures. Rogue admins – Malicious insiders deleting backups or modifying retention policies. A multi-layered backup strategy like 3-2-1 ensures that even if one or two of these failures occur, you still have a recoverable copy of your data.\nWhy replication is not a backup # One common misconception is that replication can replace backup. This is not true. 
While replication is useful for availability, it does not protect against data corruption, accidental deletions, or cyberattacks.\nKey differences between backup and replication #\n| Aspect | Backup | Replication |\n| --- | --- | --- |\n| Purpose | Disaster recovery | High availability |\n| Retention | Keeps historical versions | Only keeps the latest version |\n| Data corruption protection | Older copies remain untouched | Corruption is replicated immediately |\n| Protection from human error | Can restore from a clean backup | Deletes or mistakes are instantly mirrored |\n| Protection from ransomware | Can recover from an old snapshot | Ransomware spreads to replicated copies |\nWhy replication fails as a backup strategy # Imagine an admin accidentally deletes a critical database. If your system only uses replication:\nReplication immediately mirrors the deletion across all systems so that the data is lost everywhere. There is no historical backup to restore from, meaning you cannot go back in time. If ransomware encrypts files, the encrypted data is also replicated immediately. In contrast, a proper backup solution with versioning allows recovery from an earlier, uncorrupted state.\nRead more about this topic: Why replication is not backup?\nWhy a backup in the same cloud account is not enough # Cloud services like AWS, Google Cloud, and Azure offer native snapshots and backups. While these options seem convenient, relying solely on one cloud provider can be a serious mistake.\nThe risks of same-cloud backups # Hacker intrusions and ransomware If an attacker gains access to your cloud account, they can delete all snapshots and backups. Many cloud providers allow instant deletion of backups, making recovery difficult. Solution: Store off-site backups with multi-day deletion delays, such as AWS Glacier Vault Lock. Storage provider failures Storage provider-side failures can happen.
For example, Amazon S3 experienced data loss incidents due to misconfigurations, and Google Cloud once accidentally deleted customer backups because of an internal process error. Solution: Store at least one copy on a separate cloud provider or on-premise tape storage. Account termination risks If your cloud provider suspends or terminates your account, you could lose access to both production and backup data stored in that same cloud. Solution: Store an additional copy in a different cloud provider or physical tape archive. Conclusion # The 3-2-1 backup rule remains a simple yet powerful strategy to protect against a wide range of data loss scenarios.\nReplication is not backup because it does not protect against accidental deletions, corruption, or ransomware. A single cloud backup is not enough since provider failures, rogue admins, or account terminations could lead to permanent data loss. Offline or air-gapped backups are critical. Tape storage or AWS Glacier with deletion locks ensures that backups cannot be easily deleted. By keeping multiple copies on different media, and one truly protected off-site, you ensure resilience against both human and technical failures. No matter the size of your organization, implementing a proper 3-2-1 backup strategy is essential to safeguard data against disaster.\nQuick takeaways # 3-2-1 backup rule fundamentals:\nMaintain at least three copies of your data on two different types of media, with one copy stored off-site or in an air-gapped environment. Backup vs. replication:\nReplication ensures high availability but mirrors errors and corruption immediately, making it insufficient as a standalone backup strategy. Comprehensive threat protection:\nA robust backup strategy defends against human error, hardware failures, cyberattacks (including ransomware), and physical disasters. 
Limitations of cloud-only backups:\nRelying solely on one cloud provider can expose your data to risks such as security breaches, misconfigurations, or account terminations. Importance of offline/air-gapped backups:\nOffline or air-gapped backups (for example, tape storage or AWS Glacier with deletion locks) are critical to prevent accidental or malicious data deletions. Ensuring data resilience:\nA multi-layered backup approach, as outlined by the 3-2-1 rule, guarantees that even if one or more copies are lost, your data remains recoverable. ","date":"11 February 2025","externalUrl":null,"permalink":"/posts/2025-02-11/the-3-2-1-backup-rule-a-proven-strategy-for-data-protection/","section":"Plakar Blog","summary":"Discover why the 3-2-1 backup rule remains the gold standard for protecting your data from deletion, disasters, and ransomware","title":"The 3-2-1 backup rule: A proven strategy for data protection","type":"posts"},{"content":"Let us get straight to the point: Amazon S3 is a phenomenal service for scalable, reliable object storage but it is not a backup solution. Sure, S3 boasts rock-solid durability and cost efficiency, but relying on it alone for backups is like trying to cover your bases with duct tape. In today’s world, where a single misclick can spell disaster, a thoughtful, multi-layered backup strategy is not just nice to have; it is absolutely essential.\nThis article digs into the reasons why S3’s native features do not suffice when it comes to safeguarding your data. We expose the design limitations of S3 for backup tasks, compare it with dedicated backup solutions, and highlight real-world scenarios that illustrate these challenges. Along the way, we share best practices, practical examples, and a few tongue-in-cheek observations about the perils of relying on S3 as your one-and-only data safeguard. 
If you are serious about data protection, prepare to rethink your backup strategy.\nUnderstanding S3: its strengths and its intended purpose # Amazon S3 was built to be a high-availability, scalable object storage service. It is designed to handle immense data loads for applications that demand immediate access. It is brilliant at what it does, but it was never designed to be the all-in-one solution for backups.\nWhat is Amazon S3? # At its core, S3 is an object storage system. You drop files into buckets and retrieve them whenever you need them. Its architecture is optimized for durability by distributing data across multiple physical sites. In other words, if a hard drive fails in one location, your data remains safe elsewhere. However, this setup is intended for live data access and distribution, not for managing the nuanced requirements of backups.\nS3’s features such as lifecycle policies, access control lists, and even versioning are powerful, yet they are not built for the kind of point-in-time recoveries or granular data management that a true backup solution demands. S3’s design prioritizes scale and accessibility over the precision and control that backups require. It is like using a fire hose to water your garden: effective for one purpose, but not ideal for another.\nWhy S3 is not meant to be a backup # The reality is that S3 was never designed with a backup mindset. When backing up data, you are not just storing files; you are preparing for worst-case scenarios such as accidental deletion, malicious actions, or even regional disasters. For example, S3’s eventual consistency model means that changes might not immediately reflect across all copies. In a backup scenario, that delay can turn a near-instant restore into a waiting game that could cost you dearly.\nMoreover, S3’s versioning, while useful for retrieving older copies, is not foolproof. If all versions are deleted at once, you are in trouble. 
Additionally, features like MFA delete make the process of removing unwanted files cumbersome, and Object Lock can restrict deletion permanently, which is not always desirable. S3 was built to store data reliably, not to manage the intricacies of a backup cycle.\nMany cloud services tout extreme durability, but it is important to remember that durability is not the same as recoverability. S3 excels at keeping your data safe from hardware failures, but it does not protect you from human error, configuration mistakes, or targeted attacks. This is why you need a backup strategy that addresses these challenges.\nThe real-world limitations of using S3 as a backup # Relying on S3 as your sole backup solution is a risky proposition. It is not that S3 loses data; rather, it is not built for the specific challenges of backup and recovery. Let us examine the practical limitations.\nData consistency and recovery challenges # Imagine you accidentally delete a critical file and expect S3’s versioning to rescue you. In theory, it might help, but S3 uses an eventual consistency model for certain operations. This means that immediately after a change, not all copies of your data may be updated. In a scenario where every second matters, this delay can lead to inconsistencies in the recovered data.\nConsider a situation where an application update inadvertently overwrites the latest version of a critical file. With S3, versioning might help, but only if you can roll back quickly and if the previous version is intact. More often than not, recovery becomes a tedious process. Recovery is not just about retrieving the latest copy of a file; it is about ensuring every piece of data is exactly where it should be, a task S3 is not optimized for.\nSecurity and compliance limitations # When protecting vital data, security is not optional; it is a mandate. Although S3 supports encryption and access control, setting these features up correctly can be challenging. 
A minor misconfiguration may leave your backup data exposed to malicious actors. Traditional backup solutions are designed with integrated security protocols that ensure data remains encrypted both in transit and at rest with minimal effort. S3, on the other hand, requires continuous attention to maintain proper security.\nCompliance is another concern. For industries with strict regulatory requirements, S3’s native security settings may not be sufficient. Standards such as HIPAA, GDPR, or PCI-DSS often require detailed audit trails, comprehensive access logs, and advanced encryption methods. Achieving these with S3 demands significant time and resource investment. While S3 may have impressive durability numbers, its security capabilities are limited when it comes to comprehensive backup needs.\nVersioning and deletion: a double-edged sword # S3 versioning is often viewed as a safety net for backups. In practice, however, it can work against you. Versioning allows you to retrieve older copies of your objects, but it also leaves you vulnerable if all versions are accidentally or maliciously deleted. MFA delete is intended to offer extra protection, but it can make even intentional deletions more complicated. Object Lock might seem like a solution for compliance, but it also means you can never completely remove the data if necessary.\nThe features that S3 provides to help with data recovery can sometimes hinder recovery efforts in a crisis. The backup world requires both durability and flexibility, along with rapid recovery capabilities. S3’s design falls short in this regard, often leaving you with a solution that works under ideal conditions but may fail when you need it most.\nComparing S3 with dedicated backup solutions # When it comes to data protection, you have two choices: force S3 into a role it was not designed for or use tools built specifically for backup. 
Here is a comparison of these options.\nTailored backup features versus S3\u0026rsquo;s generalist approach # Dedicated backup solutions are engineered for backup and recovery. They offer features such as incremental and differential backups, automated snapshotting, and rapid point-in-time restores. These systems are built with the assumption that mistakes will happen, whether due to human error or unforeseen issues, and they are designed to minimize downtime and data loss.\nS3, by contrast, is a general-purpose storage service. It reliably stores data but does not handle the nuances of backup cycles, retention policies, or quick recovery times. For instance, a dedicated backup system can restore a single file from a specific moment in time, while with S3, you may have to manually search through multiple versions. When disaster strikes, it is not as simple as instructing S3 to \u0026ldquo;roll back\u0026rdquo; and expecting everything to be restored instantly.\nCost, complexity, and management overhead # At first glance, S3 may seem like a cheaper option due to its pay-as-you-go pricing. However, when you factor in the additional software, manual processes, and ongoing monitoring needed to make S3 work as a backup, the costs can quickly add up. Dedicated backup solutions come with integrated management interfaces, reporting tools, and automated recovery procedures that simplify operations and reduce the risk of human error.\nThe management overhead is not only a financial concern; it is also a matter of time and effort. Keeping track of encryption keys, version histories, and access policies in S3 can become a logistical challenge. In contrast, a dedicated backup system is designed to integrate seamlessly with your workflows, allowing you to focus on ensuring your data is restorable when you need it most.\nBest practices for a rock-solid backup strategy # No one is immune to mistakes. 
Fat-fingered deletions, configuration errors, and unforeseen mishaps are all part of managing data. That is why you need a backup strategy that is as layered as your overall security measures. Here are some best practices for building a robust backup system.\nEmbracing a multi-layered backup approach # Relying on a single backup method is a recipe for disaster. Instead, adopt a multi-layered strategy. S3 is excellent for storing massive amounts of data economically, but for critical data, you need multiple copies in different locations. Use local backups for rapid recovery, integrate cloud-native backup tools for continuous data protection, and consider offsite backups with other providers for additional security.\nSome organizations use S3 for archival purposes while relying on dedicated backup appliances or software for daily snapshots and rapid restores. This redundancy ensures that if one backup fails, another layer is ready to take over.\nLeveraging the right tools for the job # Not all backup tools are created equal. Choose systems that offer automated testing, granular recovery options, and seamless integration with your existing infrastructure. Whether you opt for a commercial backup solution or an open-source alternative, make sure it supports features like incremental backups, easy-to-use dashboards, and robust encryption. The goal is to create a system where every piece of data can be tracked, restored, and verified without having to perform complex maneuvers in a crisis.\nLessons from real-world case studies # Real-world experiences provide valuable lessons. Many organizations have discovered, often the hard way, that relying solely on S3 can lead to prolonged downtime and painful recovery processes. For example, one mid-sized firm experienced a major data loss due to accidental mass deletion. They mitigated the impact by integrating S3 with a dedicated backup solution, which not only reduced recovery times but also improved overall data governance. 
Regular testing of backup processes can reveal weaknesses before a real crisis hits and ensure that when mistakes occur, your data is safe and recoverable.\nQuick takeaways # S3 is excellent for scalable, high-availability object storage, but it is not a backup solution. S3\u0026rsquo;s versioning model and asynchronous cross-region replication can create significant recovery challenges. Security configurations in S3 require constant vigilance to protect sensitive backup data. Dedicated backup solutions offer granular recovery, automated testing, and true point-in-time restores. A multi-layered backup strategy that includes local, cloud, and offsite backups minimizes risk. Conclusion # In summary, while Amazon S3 is a robust platform for storing large amounts of data with impressive durability, it is not engineered to serve as a comprehensive backup solution. S3\u0026rsquo;s architecture emphasizes high availability and cost efficiency, not the nuanced demands of rapid recovery, granular version control, or robust security in backup scenarios. Relying solely on S3 for backup is similar to using a reliable delivery truck as an armored vault; it transports your data effectively but is not designed to handle every contingency.\nA thoughtful backup strategy requires multiple layers: local backups for speed, cloud backups for redundancy, and offsite solutions for additional security. Integrating dedicated backup tools alongside S3 can help prevent the issues of accidental deletions, malicious actions, and misconfigurations that could lead to catastrophic data loss. Investing in a comprehensive backup solution is essential because when it comes to protecting critical data, durability alone is not enough.\nFAQs # 1. Why is S3 not enough as a standalone backup solution?\nS3 is designed for high-availability object storage rather than the nuanced requirements of backups such as point-in-time recovery, incremental backups, or granular restoration. 
Because it offers no point-in-time restore, recovery is a slow, manual process, making it unsuitable for critical backup needs.\n2. Can S3\u0026rsquo;s versioning be used effectively for backups?\nWhile S3 versioning can help recover older copies of objects, it is not foolproof. Accidental or malicious deletion of all versions can leave you without a fallback, and features like MFA delete complicate the process further.\n3. How do dedicated backup solutions compare to using S3 alone?\nDedicated backup solutions offer automated snapshotting, incremental backups, and rapid recovery options specifically tailored for disaster scenarios. They also include robust encryption and management features that make data restoration simpler and more reliable.\n4. What is a multi-layered backup strategy?\nA multi-layered backup strategy combines various methods—local backups for fast recovery, cloud-based backups for redundancy, and offsite solutions for disaster resilience—to ensure that if one layer fails, other copies remain available.\n5. How can I integrate S3 with a dedicated backup solution?\nMany modern backup platforms provide seamless integration with S3. These solutions use S3 for cost-effective archival storage while managing real-time backups and rapid restores through specialized software. This hybrid approach leverages the strengths of S3 without exposing you to its limitations.\n","date":"10 February 2025","externalUrl":null,"permalink":"/posts/2025-02-10/s3-is-not-a-backup-why-you-need-a-real-backup-strategy/","section":"Plakar Blog","summary":"S3 offers durable storage, not true backups. Learn why you need dedicated tools for secure, recoverable, and resilient data protection","title":"S3 is not a backup: why you need a real backup strategy","type":"posts"},{"content":"In the realm of data protection, backup and replication are two fundamental strategies employed to safeguard information. 
While they share the common goal of data preservation, they operate on distinct principles and serve different purposes. Understanding these differences is crucial for developing a robust data protection strategy.\nUnderstanding backup and replication # What is data backup? # Data backup involves creating copies of data at specific points in time that can be restored in the event of data loss or corruption. These backups are typically stored separately from the original data, often offsite or in the cloud, to protect against disasters. Backups can be full, incremental, or differential depending on the organization\u0026rsquo;s needs.\nWhat is data replication? # Data replication entails creating and maintaining duplicate copies of data across multiple locations or systems. This process ensures that data is continuously available and accessible even if one system fails. Replication can be synchronous, where data is copied in real time, or asynchronous, where data is copied at scheduled intervals.\nKey differences between backup and replication # Purpose and objectives # Backup: Provides restore points for recovering data after loss or corruption. Replication: Ensures continuous availability by maintaining real-time copies of data. Data consistency and recovery # Backup: Lets you restore data to a specific point in time, making it ideal for recovering from accidental deletions or corruption. Replication: Keeps copies consistent with the original but does not offer historical versions for recovery. Impact on performance # Backup: Typically scheduled during off-peak hours to minimize impact on performance. Replication: Continuously updates data, which can affect system performance, especially in high-volume environments. Use cases # Backup: Best for long-term data retention, compliance, and protection against accidental data loss. Replication: Ideal for mission-critical systems requiring high availability and rapid recovery. 
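The contrast above can be sketched in a few lines of Python. This is a toy model, not a real backup or replication tool: the dictionaries stand in for storage systems, and all names are purely illustrative.

```python
import copy

# Primary storage: the live data users modify.
primary = {'photo1': 'beach.jpg', 'photo2': 'sunset.jpg'}

# Replication mirrors the primary continuously.
replica = copy.deepcopy(primary)

# Backup freezes a copy at a point in time.
snapshot = copy.deepcopy(primary)

# An accidental deletion on the primary...
del primary['photo1']
replica = copy.deepcopy(primary)  # ...is faithfully mirrored to the replica.

print('photo1' in replica)   # False: replication propagated the mistake
print('photo1' in snapshot)  # True: the point-in-time backup still has it
```

Replication keeps the replica identical to the primary, mistakes included; only the snapshot preserves a restore point you can roll back to.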
Complementary roles in data protection # While backup and replication serve different purposes, they are complementary components of a comprehensive data protection strategy. Combining both ensures that data is both readily available and protected against various threats.\nExample: iCloud and the difference between backup and replication # Consider the example of iCloud to illustrate the difference between replication and backup. iCloud replicates the photos on your phone to the cloud, but it does not create traditional backups of them.\nScenario 1: Accidental deletion by a child # Imagine you leave your phone unattended and your child begins to explore it. In their curiosity, they accidentally delete some important photos. Since iCloud replicates the photos in real time, the deletions are immediately reflected in your iCloud storage, and because iCloud does not keep historical versions, the deleted photos vanish everywhere at once. If you\u0026rsquo;re lucky, you might be able to find these photos in the Recently Deleted section on iCloud. However, this option is only available for a limited time, typically around 30 days. If the deletion occurred several months ago, the photos will have already been permanently removed, making recovery very difficult or even impossible. In that case, if you don\u0026rsquo;t have a backup elsewhere, such as an external hard drive or another backup service, you may lose those precious memories for good.\nScenario 2: Data corruption from a third-party app # Suppose you download a third-party app to edit your photos. Although the app appears safe, it malfunctions or contains a bug that causes some of your photos to become corrupted. After syncing with iCloud, the corrupted photos are also replicated to the cloud. Replication synchronizes the corrupted data, and since iCloud does not allow you to restore previous versions of the photos, the damage is irreversible. 
With a backup strategy in place, you could easily restore the original, uncorrupted photos.\nScenario 3: Data loss after a breakup # Imagine a personal scenario where, after a breakup, your ex-partner, who still has access to your iCloud account or your phone, decides to delete all of your shared photos. Because iCloud replicates changes made on the phone, any deletions are immediately reflected on the cloud. iCloud does not provide a way to roll back the deletions, and the photos are permanently gone. However, if you had a separate backup, such as a hard drive backup or another cloud service, you could have recovered those precious memories even after this unexpected event.\nConclusion # In the world of data protection, backup remains an indispensable component. Even the most robust replication strategy cannot replace the need for backups. Replication keeps data available in real time but fails to protect against scenarios such as human error, corruption, or malicious actions. Such errors are simply mirrored across your replicated systems.\nThink of replication as a safety net that ensures continuous data access. Without backups, however, this net does not catch your mistakes. Backups serve as a safety vault that allows you to recover data to a specific point in time and prevents the irreversible loss of critical information. Relying solely on replication, without the safeguard of backups, leaves data vulnerable to irreparable damage, whether due to accidental deletions, software bugs, or cyber threats.\nFor organizations and individuals alike, it is crucial to understand that replication cannot replace backups. A proper data protection strategy requires both to truly secure valuable information.\nQuick takeaways # Backup creates copies of data at specific points in time for recovery purposes. Replication maintains real-time copies of data across multiple locations for high availability. Backups are cost-effective and suitable for long-term data retention. 
Replication requires significant infrastructure investment and can impact system performance. Combining both strategies enhances data protection and recovery capabilities. FAQs # Can replication replace backups?\nNo, replication ensures data availability but does not provide historical recovery points like backups do.\nHow does replication affect system performance?\nContinuous data replication can consume system resources and may impact performance, especially in high-volume environments.\nIs replication more expensive than backup?\nYes, replication typically requires more infrastructure and storage, making it more costly than traditional backup solutions.\nCan replication be used for disaster recovery?\nYes, replication is a key component of disaster recovery plans, ensuring data availability in case of system failures.\nHow often should backups and replications be performed?\nBackups should be scheduled based on data change frequency and compliance requirements, while replication frequency depends on the criticality of the data and the organization\u0026rsquo;s recovery objectives.\n","date":"10 February 2025","externalUrl":null,"permalink":"/posts/2025-02-10/why-replication-is-not-backup/","section":"Plakar Blog","summary":"Replication ensures availability, not recovery. 
Learn why true backups remain essential to protect against deletion, corruption, or malicious actions","title":"Why replication is not backup","type":"posts"},{"content":"","externalUrl":null,"permalink":"/solutions/aws/","section":"Solutions","summary":"","title":"AWS","type":"solutions"},{"content":"","externalUrl":null,"permalink":"/branding/","section":"Plakar | The Open Standard for Backup and Restore","summary":"","title":"Branding","type":"page"},{"content":"","externalUrl":null,"permalink":"/community/","section":"Plakar | The Open Standard for Backup and Restore","summary":"","title":"Community","type":"page"},{"content":"","externalUrl":null,"permalink":"/solutions/compare/","section":"Solutions","summary":"","title":"Compare","type":"solutions"},{"content":"","externalUrl":null,"permalink":"/contact/","section":"Plakar | The Open Standard for Backup and Restore","summary":"","title":"Contact","type":"page"},{"content":"","externalUrl":null,"permalink":"/solutions/cost-efficiency/","section":"Solutions","summary":"","title":"Cost Efficiency","type":"solutions"},{"content":"","externalUrl":null,"permalink":"/docs/","section":"Docs","summary":"","title":"Docs","type":"docs"},{"content":"","externalUrl":null,"permalink":"/solutions/on-premises/","section":"Solutions","summary":"","title":"On-Premises","type":"solutions"},{"content":"","externalUrl":null,"permalink":"/download/v1.0.4/","section":"Download Plakar","summary":"Official binaries and packages for Plakar v1.0.4 with integrity verification instructions.","title":"Plakar v1.0.4","type":"download"},{"content":"","externalUrl":null,"permalink":"/download/v1.0.5/","section":"Download Plakar","summary":"Official binaries and packages for Plakar v1.0.5 with integrity verification instructions.","title":"Plakar v1.0.5","type":"download"},{"content":"","externalUrl":null,"permalink":"/download/v1.0.6/","section":"Download Plakar","summary":"Official binaries and packages for Plakar v1.0.6 with integrity 
verification instructions.","title":"Plakar v1.0.6","type":"download"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"},{"content":"","externalUrl":null,"permalink":"/solutions/","section":"Solutions","summary":"","title":"Solutions","type":"solutions"}]