Introduction to Kanidm

Kanidm is an identity management server, acting as an authority on accounts and authorisation within a technical environment.

The intent of the Kanidm project is to:

  • Provide a single truth source for accounts, groups and privileges.
  • Enable integrations to systems and services so they can authenticate accounts.
  • Make system, network, application and web authentication easy and accessible.

NOTICE: This is a pre-release project. While all effort has been made to ensure no data loss or security flaws, you should still be careful when using this in your environment.

Library documentation

Looking for the rustdoc documentation for the libraries themselves? Click here!

Why do I want Kanidm?

Whether you work in a business, a volunteer organisation, or are an enthusiast who manages their personal services, we all need ways to authenticate and identify ourselves to these systems, and to determine what authorisation and privileges we have while accessing them.

We've probably all been in workplaces where you end up with multiple accounts on various systems - one for a workstation, different SSH keys for different tasks, maybe some shared account passwords. Not only is it difficult for people to manage all these different credentials and what they have access to, but it also means that sometimes these credentials have more access or privilege than they require.

Kanidm acts as a central authority of accounts in your organisation and allows each account to associate many devices and credentials with different privileges. An example of how this looks:

                           │                  ││
      ┌───────────────┬───▶│      Kanidm      │◀─────┬─────────────────────────┐
      │               │    │                  ├┘     │                         │
      │               │    └──────────────────┘      │                       Verify
 Account Data         │              ▲               │                       Radius
  References          │              │               │                      Password
      │               │              │               │                         │
      │               │              │               │                  ┌────────────┐
      │               │              │               │                  │            │
      │               │              │            Verify                │   RADIUS   │
┌────────────┐        │        Retrieve SSH     Application             │            │
│            │        │         Public Keys      Password               └────────────┘
│  Database  │        │              │               │                        ▲
│            │        │              │               │                        │
└────────────┘        │              │               │               ┌────────┴──────┐
       ▲              │              │               │               │               │
       │              │              │               │               │               │
┌────────────┐        │       ┌────────────┐  ┌────────────┐  ┌────────────┐  ┌────────────┐
│            │        │       │            │  │            │  │            │  │            │
│  Web Site  │        │       │    SSH     │  │   Email    │  │    WIFI    │  │    VPN     │
│            │        │       │            │  │            │  │            │  │            │
└────────────┘        │       └────────────┘  └────────────┘  └────────────┘  └────────────┘
       ▲              │              ▲               ▲               ▲               ▲
       │              │              │               │               │               │
       │              │              │               │               │               │
       │          Login To           │               │               │               │
   SSO/Oauth     Oauth/SSO       SSH Keys       Application        Radius         Radius
       │              │              │           Password         Password       Password
       │              │              │               │               │               │
       │              │              │               │               │               │
       │              │              │               │               │               │
       │              │        ┌──────────┐          │               │               │
       │              │        │          │          │               │               │
       └──────────────┴────────│  Laptop  │──────────┴───────────────┴───────────────┘
                               │          │
                                │   You    │
                                └──────────┘

A key design goal is that you authenticate with your device in some manner, and then your device continues to authenticate you in the future. Each of these credential types (SSH keys, application passwords, RADIUS passwords and others) is a "thing your device knows". Each password has limited capability, and can only access that exact service or resource.

This helps improve security; a compromise of the service or of the network transmission does not grant an attacker unlimited access to your account and all its privileges. As the credentials are specific to a device, if a device is compromised you can revoke its associated credentials. If a specific service is compromised, only the credentials for that service need to be revoked.

Because this model centres on the device and uses many per-service credentials, Kanidm adds workflows and automation to reduce the amount of manual credential handling. An example of this is the use of QR codes with deployment profiles to automatically enrol wireless credentials.

Installing the Server

NOTE Our preferred deployment method is in containers, and this documentation assumes you're running in Docker. Kanidm will also run on traditional compute: server builds are available for multiple platforms, or you can build the binaries yourself.

Currently we have docker images for the server components. You can fetch them by running the commands:

docker pull kanidm/server:latest
docker pull kanidm/radius:latest

If you wish to use an x86_64 cpu-optimised version (See System Requirements CPU), you should use:

docker pull kanidm/server:x86_64_latest

You may need to adjust your example commands throughout this document to suit.

Development Version

If you are interested in running the latest code from development, you can do this by changing the docker tag to kanidm/server:devel or kanidm/server:x86_64_v3_devel instead.

System Requirements


CPU

If you are using the x86_64 cpu-optimised version, you must have a CPU that is from 2013 or newer (Haswell, Ryzen). The following instruction flags are used:

cmov, cx8, fxsr, mmx, sse, sse2, cx16, sahf, popcnt, sse3, sse4.1, sse4.2, avx, avx2,
bmi, bmi2, f16c, fma, lzcnt, movbe, xsave

Older or unsupported CPUs may raise a SIGILL (Illegal Instruction) on hardware that is not supported by the project.

In this case, you should use the standard server:latest image.

In the future we may apply a baseline of flags as a requirement for x86_64 for the server:latest image. These flags will be:

cmov, cx8, fxsr, mmx, sse, sse2
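
If you are unsure whether a host qualifies, you can check the flags the CPU advertises before pulling the optimised image. This is a sketch for Linux hosts only; note that /proc/cpuinfo spells some flags differently from the list above (for example sse4.1 appears as sse4_1), so only a representative subset is checked here:

```shell
# Check /proc/cpuinfo for a representative subset of the required flags.
# Flag spellings follow /proc/cpuinfo, not the compiler's flag names.
required="cmov cx8 fxsr mmx sse sse2 cx16 popcnt avx avx2 bmi2 f16c fma movbe xsave"
missing=""
if [ -r /proc/cpuinfo ]; then
    for flag in $required; do
        grep -qw "$flag" /proc/cpuinfo || missing="$missing $flag"
    done
    if [ -z "$missing" ]; then
        echo "CPU advertises the checked flags; the optimised image should run"
    else
        echo "Missing:$missing - use the standard server:latest image"
    fi
else
    echo "No /proc/cpuinfo on this host; check CPU flags another way"
fi
```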


Memory

Kanidm extensively uses memory caching, trading memory consumption for improved parallel throughput. You should expect to see about 64KB of RAM per entry in your database, depending on cache tuning and settings.


Disk

You should expect to use up to 8KB of disk per entry you plan to store. As an estimate, a 10,000 entry database will consume about 40MB, and a 100,000 entry database about 400MB.

For best performance, you should use NVME or other Flash media.
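
As a back-of-envelope check, the estimates above can be combined for a given deployment size. The figures below use 64KB of RAM per cached entry and the roughly 4KB average on-disk size implied by the 10,000-entry estimate:

```shell
# Rough sizing for a 10,000 entry deployment.
entries=10000
echo "RAM (full cache): $((entries * 64 / 1024)) MB"   # 625 MB
echo "Disk (approx):    $((entries * 4 / 1024)) MB"    # ~39 MB
```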


TLS

You'll need a volume where you can place configuration, certificates, and the database:

docker volume create kanidmd

You should have a chain.pem and key.pem in your kanidmd volume. The reason for requiring TLS is explained in the "Why TLS?" chapter. In summary, TLS is our root of trust between the server and clients, and a critical element of ensuring a secure system.

The key.pem should be a single PEM private key, with no encryption. The file content should be similar to:

-----BEGIN RSA PRIVATE KEY-----
MII...<base64 encoded key data>...
-----END RSA PRIVATE KEY-----

The chain.pem is a series of PEM formatted certificates. The leaf certificate, or the certificate that matches the private key should be the first certificate in the file. This should be followed by the series of intermediates, and the final certificate should be the CA root. For example:

<leaf certificate>
<intermediate certificate>
[ more intermediates if needed ]
<ca/root certificate>

HINT If you are using Let's Encrypt the provided files "fullchain.pem" and "privkey.pem" are already correctly formatted as required for Kanidm.
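
If you just want to evaluate Kanidm and do not yet have a CA-issued certificate, you can generate a self-signed pair for testing only; production deployments should use a real CA such as Let's Encrypt. The hostname below is a placeholder for your own:

```shell
# Generate an unencrypted private key and a self-signed certificate
# (testing only - clients will need to trust it explicitly, e.g. via -C).
openssl req -x509 -newkey rsa:4096 -sha256 -days 31 -nodes \
    -keyout key.pem -out chain.pem \
    -subj "/CN=idm.example.com"
```

The resulting key.pem and chain.pem can then be copied into the kanidmd volume.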

You can validate that the leaf certificate matches the key with the command:

# openssl rsa -noout -modulus -in key.pem | openssl sha1
# openssl x509 -noout -modulus -in chain.pem | openssl sha1

If the key and the leaf certificate match, both commands will print the same hash.

If your chain.pem contains the CA certificate, you can validate this file with the command:

openssl verify -CAfile chain.pem chain.pem

If your chain.pem does not contain the CA certificate (Let's Encrypt chains do not contain the CA, for example) then you can validate with this command:

openssl verify -untrusted fullchain.pem fullchain.pem

NOTE Here the "-untrusted" flag means that a list of further certificates in the chain to build up to the root is provided, but that the system CA root should still be consulted. Verification is NOT bypassed or allowed to be invalid.

If these verifications pass you can now use these certificates with Kanidm. To put the certificates in place you can use a shell container that mounts the volume such as:

docker run --rm -i -t -v kanidmd:/data -v /my/host/path/work:/work opensuse/leap:latest /bin/sh -c "cp /work/* /data/"

OR for a shell into the volume:

docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh

Continue on to Configuring the Server

Configuring the Server

You will also need a config file in the volume named server.toml (Within the container it should be /data/server.toml). Its contents should be as follows:

#   The webserver bind address. Will use HTTPS if tls_* is provided.
#   Defaults to ""
bindaddress = "[::]:8443"
#   The read-only ldap server bind address. The server will use LDAPS if tls_* is provided.
#   Defaults to "" (disabled)
# ldapbindaddress = "[::]:3636"
#   The path to the kanidm database.
db_path = "/data/kanidm.db"
#   If you have a known filesystem, kanidm can tune sqlite to match. Valid choices are:
#   [zfs, other]
#   If you are unsure about this leave it as the default (other). After changing this
#   value you must run a vacuum task.
#   - zfs:
#     * sets sqlite pagesize to 64k. You must set recordsize=64k on the zfs filesystem.
#   - other:
#     * sets sqlite pagesize to 4k, matching most filesystems block sizes.
# db_fs_type = "zfs"
#   The number of entries to store in the in-memory cache. Minimum value is 256. If unset
#   an automatic heuristic is used to scale this.
# db_arc_size = 2048
#   TLS chain and key in pem format. Both must be commented, or both must be present
# tls_chain = "/data/chain.pem"
# tls_key = "/data/key.pem"
#   The log level of the server. May be default, verbose, perfbasic, perffull
#   Defaults to "default"
# log_level = "default"
#   The origin for webauthn. This is the url to the server, with the port included if
#   it is non-standard (any port except 443)
origin = ""
#   The role of this server. This affects features available and how replication may interact.
#   Valid roles are:
#   - WriteReplica
#     This server provides all functionality of Kanidm. It allows authentication, writes, and
#     the web user interface to be served.
#   - WriteReplicaNoUI
#     This server is the same as a WriteReplica, but does NOT offer the web user interface.
#   - ReadOnlyReplica
#     This server will not accept writes initiated by clients. It supports authentication and
#     reads, and must have a replication agreement as a source of its data.
#   Defaults to "WriteReplica".
# role = "WriteReplica"
# [online_backup]
#   The path to the output folder for online backups
# path = "/var/lib/kanidm/backups/"
#   The schedule to run online backups, in cron format.
#   Every day at 22:00 UTC (default):
# schedule = "00 22 * * *"
#   Four times a day, at 3 minutes past the hour, every 6 hours:
# schedule = "03 */6 * * *"
#   Number of backups to keep (default 7)
# versions = 7

An example is located in examples/server.toml.

Then you can set up the initial admin account and initialise the database into your volume:

docker run --rm -i -t -v kanidmd:/data kanidm/server:latest /sbin/kanidmd recover_account -c /data/server.toml -n admin

You then want to set your domain name so that security principal names (SPNs) are generated correctly. This domain name must match the URL/origin of the server that you plan to use to interact with it, so that other features work correctly. It is possible to change this domain name later.

docker run --rm -i -t -v kanidmd:/data kanidm/server:latest /sbin/kanidmd domain_name_change -c /data/server.toml -n <your domain name>

Now we can run the server so that it can accept connections. The server defaults to using -c /data/server.toml:

docker run -p 8443:8443 -v kanidmd:/data kanidm/server:latest

Security Hardening

Kanidm ships with a secure-by-default configuration, however that is only as strong as the platform that Kanidm operates in. This could be your container environment or your Unix-like system.

This chapter will detail a number of warnings and security practices you should follow to ensure that Kanidm operates in a secure environment.

The main server is a high-value target for a potential attack, as Kanidm serves as the authority on identity and authorisation in a network. Compromise of the Kanidm server is equivalent to a full-network take over, AKA "game over".

The unixd resolver is also a high-value target, as it can be abused to allow unauthorised access to a server, to intercept communications to the server, and more. It must also be protected carefully.

For this reason, Kanidm's components must be protected carefully. Kanidm avoids many classic attacks by being developed in a memory safe language, but risks still exist.

Startup Warnings

At startup Kanidm will warn you if the environment it is running in is suspicious or has risks. For example:

kanidmd server -c /tmp/server.toml
WARNING: permissions on /tmp/server.toml may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: /tmp/server.toml has 'everyone' permission bits in the mode. This could be a security risk ...
WARNING: /tmp/server.toml owned by the current uid, which may allow file permission changes. This could be a security risk ...
WARNING: permissions on ../insecure/ca.pem may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: permissions on ../insecure/cert.pem may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: permissions on ../insecure/key.pem may not be secure. Should be readonly to running uid. This could be a security risk ...
WARNING: ../insecure/key.pem has 'everyone' permission bits in the mode. This could be a security risk ...
WARNING: DB folder /tmp has 'everyone' permission bits in the mode. This could be a security risk ...

Each warning highlights an issue that may exist in your environment. It is not possible for us to prescribe an exact configuration that may secure your system. This is why we only present possible risks.

Should be readonly to running uid

Files such as configuration files should be read-only to this UID/GID. This is so that if an attacker is able to gain code execution, they are unable to modify the configuration to write or overwrite files in other locations, or to tamper with the system's configuration.

This can be prevented by changing the file's ownership to another user, or by removing the "write" bits from the group.

'everyone' permission bits in the mode

This means that given a permission mask, "everyone" or all users of the system can read, write or execute the content of this file. This may mean that if an account on the system is compromised the attacker can read Kanidm content and may be able to further attack the system as a result.

This can be prevented by removing the 'everyone' execute bits from parent directories containing the configuration, and removing the 'everyone' bits from the files in question.

owned by the current uid, which may allow file permission changes

File permissions in unix systems are a discretionary access control system, which means the named uid owner is able to further modify the access of a file regardless of the current settings. For example:

[william@amethyst 12:25] /tmp > touch test
[william@amethyst 12:25] /tmp > ls -al test
-rw-r--r--  1 william  wheel  0 29 Jul 12:25 test
[william@amethyst 12:25] /tmp > chmod 400 test
[william@amethyst 12:25] /tmp > ls -al test
-r--------  1 william  wheel  0 29 Jul 12:25 test
[william@amethyst 12:25] /tmp > chmod 644 test
[william@amethyst 12:26] /tmp > ls -al test
-rw-r--r--  1 william  wheel  0 29 Jul 12:25 test

Notice that even though the file was set to be read-only for william, with no permissions for any other users, william is still able to change the bits to add write permissions back, or to grant permissions to other users.

This can be prevented by making the file's owner a different UID than the one the kanidm process runs as.

A secure example

Between these three issues it can be hard to see a possible strategy to secure files; however, one way exists: group read permissions. The most effective method to secure resources for kanidm is to set configurations to:

[william@amethyst 12:26] /etc/kanidm > ls -al server.toml
-r--r-----   1 root           kanidm      212 28 Jul 16:53 server.toml

The kanidm server should be run as "kanidm:kanidm" with the appropriate user and user private group created on your system. This applies to unixd configuration as well.
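
The 440 root:kanidm pattern can be rehearsed safely in a scratch directory before touching the real configuration. This sketch uses a temporary file, with the current user and group standing in for root:kanidm:

```shell
# Demonstrate the read-only, group-readable mode from the example above.
# In production the owner would be root and the group would be kanidm.
tmpdir=$(mktemp -d)
touch "$tmpdir/server.toml"
chmod 440 "$tmpdir/server.toml"
ls -l "$tmpdir/server.toml"    # -r--r----- : owner and group may read, nobody may write
```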

For the database your data folder should be:

[root@amethyst 12:38] /data/kanidm > ls -al .
total 1064
drwxrwx---   3 root     kanidm      96 29 Jul 12:38 .
-rw-r-----   1 kanidm   kanidm  544768 29 Jul 12:38 kanidm.db

i.e. this means 770 root:kanidm. This allows kanidm to create new files in the folder, but prevents kanidm from being able to change the permissions of the folder. Because the folder does not have 'everyone' mode bits, the content of the database is secure, because other users can not cd into or read from the directory.

Configurations for clients such as /etc/kanidm/config should be secured with the permissions as:

[william@amethyst 12:26] /etc/kanidm > ls -al config
-r--r--r--    1 root  wheel    38 10 Jul 10:10 config

This file should be everyone-readable which is why the bits are defined as such.

NOTE: Why do you use 440 or 444 modes?

A bug exists in the implementation of readonly() in Rust: it checks "does a write bit exist for any user" rather than "can the current uid write the file?". This distinction is subtle, but it affects the check. We don't believe this is a significant issue, though, because setting these files to 440 and 444 helps to prevent accidental changes by an administrator anyway.

Running as non-root in docker

The commands provided in this book will run kanidmd as "root" in the container to make the onboarding smoother. However, this is not recommended in production for security reasons.

You should allocate a uidnumber/gidnumber for the service to run as that is unique on your host system. In this example we use 1000:1000

You will need to adjust the permissions on the /data volume to ensure that the process can manage the files. Kanidm requires the ability to write to the /data directory to create the sqlite files. The uid/gidnumber should match the one chosen above. You could consider the following to help isolate these files:

docker run --rm -i -t -v kanidmd:/data opensuse/leap:latest /bin/sh
# mkdir /data/db/
# chown 1000:1000 /data/db/
# chmod 750 /data/db/
# sed -i -e "s/db_path.*/db_path = \"\/data\/db\/kanidm.db\"/g" /data/server.toml
# chown root:root /data/server.toml
# chmod 644 /data/server.toml

You can then run the kanidm server in docker as this user:

docker run --rm -i -t -u 1000:1000 -v kanidmd:/data kanidm/server:latest /sbin/kanidmd ...

HINT You need to pass the uidnumber/gidnumber to the -u argument, as the container can't resolve usernames from the host system.

Client tools

To interact with Kanidm as an administrator, you'll need to use our command line tools. If you haven't installed them yet, install them now.

Kanidm configuration

You can configure kanidm to help make commands simpler by modifying ~/.config/kanidm or /etc/kanidm/config.

uri = ""
verify_ca = true|false
verify_hostnames = true|false
ca_path = "/path/to/ca.pem"
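
A filled-in example (the hostname and CA path here are hypothetical placeholders for your own deployment):

```toml
uri = "https://idm.example.com:8443"
verify_ca = true
verify_hostnames = true
ca_path = "/etc/kanidm/ca.pem"
```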

Once configured, you can test this with:

kanidm self whoami --name anonymous

Session Management

To authenticate as a user for use with the command line, you need to use the login command to establish a session token.

kanidm login --name USERNAME
kanidm login --name admin

Once complete, you can use kanidm without reauthenticating for a period of time for administration.

You can list active sessions with:

kanidm session list

Sessions will expire after a period of time (by default 1 hour). To remove these expired sessions locally you can use:

kanidm session cleanup

To logout of a session:

kanidm logout --name USERNAME
kanidm logout --name admin

Installing Client Tools

NOTE As this project is in a rapid development phase, running different release versions will likely present incompatibilities. Ensure you're running the same release version of client/server binaries (eg. 1.1.0-alpha5, released 2021-07-07)

From packages

Kanidm currently supports:

  • OpenSUSE Tumbleweed
  • OpenSUSE Leap 15.3
  • Fedora 33/34

OpenSUSE Tumbleweed

Kanidm has been part of OpenSUSE Tumbleweed since October 2020. This means you can install the clients with:

zypper ref
zypper in kanidm-clients

OpenSUSE Leap 15.3

Leap 15.3 is not yet fully supported by Kanidm. For an experimental client, you can try the development repository. Using zypper you can add the repository with:

zypper ar -f obs://network:idm network_idm

Then you need to refresh your metadata and install the clients.

zypper ref
zypper in kanidm-clients


Fedora

Fedora is still experimentally supported through the development repository. You need to add the repository metadata into the correct directory:

cd /etc/yum.repos.d
# 33
sudo wget
# 34
sudo wget

You can then install with:

sudo dnf install kanidm-clients

From Source

After you check out the source (see GitHub), navigate to:

cd kanidm_tools
cargo install --path .

Checking that the tools work

Now you can check your instance is working. You may need to provide a CA certificate for verification with the -C parameter:

kanidm login --name anonymous
kanidm self whoami -C ../path/to/ca.pem -H https://localhost:8443 --name anonymous
kanidm self whoami -H https://localhost:8443 --name anonymous

Now you can take some time to look at what commands are available - please ask for help at any time.

Accounts and groups

Accounts and Groups are the primary reason for Kanidm to exist. Kanidm is optimised as a repository for this data. As a result, there are many concepts and important details to understand.

Default Accounts and Groups

Kanidm ships with a number of default accounts and groups. This is to give you the best out of box experience possible, as well as supplying best practice examples related to modern IDM systems.

The system admin account (the account you recovered in the setup) has limited privileges - only to manage high-privilege accounts and services. This is to help separate system administration from identity administration actions. An idm_admin is also provided that is only for management of accounts and groups.

Both admin and idm_admin should NOT be used for daily activities - they exist for initial system configuration, and for disaster recovery scenarios. You should delegate permissions as required to named user accounts instead.

The majority of the provided content is privilege groups that provide rights over Kanidm administrative actions. These include groups for account management, person management (personal and sensitive data), group management, and more.

Recovering the Initial idm_admin Account

By default the idm_admin account has no password, and can not be accessed. You should recover it with the admin (system admin) account. We recommend the use of reset_credential, as it provides a high-strength, random, machine-only password.

kanidm account credential reset_credential --name admin idm_admin
Generated password for idm_admin: tqoReZfz....

Creating Accounts

We can now use the idm_admin to create initial groups and accounts.

kanidm group create demo_group --name idm_admin
kanidm account create demo_user "Demonstration User" --name idm_admin
kanidm group add_members demo_group demo_user --name idm_admin
kanidm group list_members demo_group --name idm_admin
kanidm account get demo_user --name idm_admin

You can also use anonymous to view users and groups - note that you won't see as many fields due to the different anonymous access profile limits!

kanidm account get demo_user --name anonymous

Viewing Default Groups

You should take some time to inspect the default groups which are related to default permissions. These can be viewed with:

kanidm group list
kanidm group get <name>

Resetting Account Credentials

Members of the idm_account_manage_priv group have the rights to manage other users' account security and login aspects. This includes resetting account credentials.

We can perform a password reset on the demo_user for example as idm_admin, who is a default member of this group.

kanidm account credential set_password demo_user --name idm_admin
kanidm self whoami --name demo_user

Nested Groups

Kanidm supports groups being members of groups, allowing nested groups. These nesting relationships are shown through the "memberof" attribute on groups and accounts.

Kanidm makes all group-membership determinations by inspecting an entry's "memberof" attribute.

An example can be easily shown with:

kanidm group create group_1 --name idm_admin
kanidm group create group_2 --name idm_admin
kanidm account create nest_example "Nesting Account Example" --name idm_admin
kanidm group add_members group_1 group_2 --name idm_admin
kanidm group add_members group_2 nest_example --name idm_admin
kanidm account get nest_example --name anonymous

Account Validity

Kanidm supports accounts that are only able to be authenticated between specific datetime windows. This takes the form of a "valid from" attribute that defines the earliest start date where authentication can succeed, and an expiry date where the account will no longer allow authentication.

This can be displayed with:

kanidm account validity show demo_user --name idm_admin
valid after: 2020-09-25T21:22:04+10:00
expire: 2020-09-25T01:22:04+10:00

These datetimes are stored in the server as UTC, but presented according to your local system time to aid correct understanding of when the events will occur.

To set the values, an account with account management permission is required (for example, idm_admin). Again, these values will be correctly translated from the entered local timezone to UTC.

# Set the earliest time the account can start authenticating
kanidm account validity begin_from demo_user '2020-09-25T11:22:04+00:00' --name idm_admin
# Set the expiry or end date of the account
kanidm account validity expire_at demo_user '2020-09-25T11:22:04+00:00' --name idm_admin

To unset or remove these values the following can be used:

kanidm account validity begin_from demo_user any|clear --name idm_admin
kanidm account validity expire_at demo_user never|clear --name idm_admin

To "lock" an account, you can set the expire_at value to the past or unix epoch. Even in the situation where the "valid from" is after the expire_at, the expire_at will be respected.

kanidm account validity expire_at demo_user 1970-01-01T00:00:00+00:00 --name idm_admin

These validity settings impact all authentication functions of the account (kanidm, ldap, radius).

Why Can't I Change admin With idm_admin?

As a security mechanism there is a distinction between "accounts" and "high permission accounts". This is to help prevent elevation attacks, where, say, a member of a service desk could attempt to reset the password of idm_admin or admin, or a member of the HR or System Admin teams could attempt to move laterally.

Generally, membership of a "privilege" group that ships with Kanidm, such as:

  • idm_account_manage_priv
  • idm_people_read_priv
  • idm_schema_manage_priv
  • many more ...

Indirectly grants you membership to "idm_high_privilege". If you are a member of this group, the standard "account" and "people" rights groups are NOT able to alter, read or manage these accounts. To manage these accounts, higher rights are required, such as those held by the admin account.

Further, groups that are considered "idm_high_privilege" can NOT be managed by the standard "idm_group_manage_priv" group.

Management of high privilege accounts and groups is granted through the "hp" variants of all privileges. A non-exhaustive list:

  • idm_hp_account_read_priv
  • idm_hp_account_manage_priv
  • idm_hp_account_write_priv
  • idm_hp_group_manage_priv
  • idm_hp_group_write_priv

Membership of any of these groups should be considered to be equivalent to system administration rights in the directory, and by extension, over all network resources that trust Kanidm.

All groups that are flagged as "idm_high_privilege" should be audited and monitored to ensure that they are not altered.

Administration Tasks

There are a number of tasks that you may wish to perform as an administrator of a service like Kanidm.

Backup and Restore

With any IDM software, it's important you have the capability to restore in case of a disaster - be that physical damage or mistake. Kanidm supports backup and restore of the database with multiple methods.

Method 1

Method 1 involves taking a backup of the database entry content, which is then re-indexed on restore. This is the preferred method.

To take the backup (assuming our docker environment) you first need to stop the instance:

docker stop <container name>
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
    kanidm/server:latest /sbin/kanidmd backup -c /data/server.toml \
    /backup/kanidm.backup.json
docker start <container name>

You can then restart your instance. DO NOT modify the backup.json as it may introduce data errors into your instance.

To restore from the backup:

docker stop <container name>
docker run --rm -i -t -v kanidmd:/data -v kanidmd_backups:/backup \
    kanidm/server:latest /sbin/kanidmd restore -c /data/server.toml \
    /backup/kanidm.backup.json
docker start <container name>

That's it!

Method 2

This is a simple backup of the data volume.

docker stop <container name>
# Backup your docker's volume folder
docker start <container name>

Method 3

Automatic backups can be generated online by a kanidmd server instance by including the [online_backup] section in the server.toml. This allows you to run regular backups, defined by a cron schedule, and maintain the number of backup versions to keep. An example is located in examples/server.toml.

Rename the domain

There are some cases where you may need to rename the domain. You should have configured this initially in the setup; however, you may have a situation where a business is changing name, merging, or has other needs that require the domain name to change.

WARNING: This WILL break ALL u2f/webauthn tokens that have been enrolled, which MAY cause accounts to be locked out and unrecoverable until further action is taken. DO NOT change the domain_name unless it is REQUIRED and you have a plan for managing these issues.

WARNING: This operation can take an extensive amount of time as ALL accounts and groups in the domain MUST have their SPN's regenerated. This will also cause a large delay in replication once the system is restarted.

You should take a backup before proceeding with this operation.

When you have created a migration plan and strategy for handling the invalidation of webauthn, you can rename the domain with the following commands:

docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
    kanidm/server:latest /sbin/kanidmd domain_name_change -c /data/server.toml
docker start <container name>

Reindexing after schema extension

In some (rare) cases you may need to reindex. Please note the server will sometimes reindex on startup as a result of the project changing its internal schema definitions. This is normal and expected - you may never need to start a reindex yourself as a result!

You'll likely notice a need to reindex if you add indexes to the schema and you see a message in your logs such as:

Index EQUALITY name not found
Index {type} {attribute} not found

This indicates that an index of type equality has been added for name, but the indexing process has not been run. The server will continue to operate and the query execution code will correctly process the query - however it will not be the optimal method of delivering the results as we need to disregard this part of the query and act as though it's un-indexed.

Reindexing will resolve this by forcing all indexes to be recreated based on their schema definitions (this works even though the schema is in the same database!)

docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
    kanidm/server:latest /sbin/kanidmd reindex -c /data/server.toml
docker start <container name>

Generally, reindexing is a rare action and should not normally be required.


Vacuum

Vacuuming is the process of reclaiming un-used pages from the sqlite freelists, as well as performing some data reordering tasks that may make some queries more efficient. It is recommended that you vacuum after a reindex is performed, or when you wish to reclaim space in the database file.

Vacuum is also able to change the pagesize of the database. After changing db_fs_type (which affects pagesize) in server.toml, you must run a vacuum for this to take effect.

docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
    kanidm/server:latest /sbin/kanidmd vacuum -c /data/server.toml
docker start <container name>


Verification

The server ships with a number of verification utilities to ensure that data is consistent, such as referential integrity or memberof.

Note that verification really is a last resort - the server does a lot to prevent and self-heal from errors at run time, so you should rarely if ever require this utility. This utility was developed to guarantee consistency during development!

You can run a verification with:

docker stop <container name>
docker run --rm -i -t -v kanidmd:/data \
    kanidm/server:latest /sbin/kanidmd verify -c /data/server.toml
docker start <container name>

If you have errors, please contact the project to help support you to resolve these.

Raw actions

The server has a low-level stateful API you can use for more complex or advanced tasks on large numbers of entries at once. Some examples are below, but generally we advise you to use the APIs as listed above.

# Create from json (group or account)
kanidm raw create -H https://localhost:8443 -C ../insecure/ca.pem -D admin example.create.account.json
kanidm raw create -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin example.create.group.json

# Apply a json stateful modification to all entries matching a filter
kanidm raw modify -H https://localhost:8443 -C ../insecure/ca.pem -D admin '{"or": [ {"eq": ["name", "idm_person_account_create_priv"]}, {"eq": ["name", "idm_service_account_create_priv"]}, {"eq": ["name", "idm_account_write_priv"]}, {"eq": ["name", "idm_group_write_priv"]}, {"eq": ["name", "idm_people_write_priv"]}, {"eq": ["name", "idm_group_create_priv"]} ]}' example.modify.idm_admin.json
kanidm raw modify -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{"eq": ["name", "idm_admins"]}' example.modify.idm_admin.json

# Search and show the database representations
kanidm raw search -H https://localhost:8443 -C ../insecure/ca.pem -D admin '{"eq": ["name", "idm_admin"]}'

# Delete all entries matching a filter
kanidm raw delete -H https://localhost:8443 -C ../insecure/ca.pem -D idm_admin '{"eq": ["name", "test_account_delete_me"]}'

Monitoring the platform

The monitoring design of Kanidm is still very much in its infancy - take part in the discussion here!


kanidmd status endpoint

kanidmd currently responds to HTTP GET requests at the /status endpoint with a JSON object of either "true" or "false". true indicates that the platform is responding to requests.

  • Example URL: https://<server>/status
  • Expected response: one of either true or false (without quotes)
  • Additional headers: x-kanidm-opid
  • Content type: application/json
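Since the endpoint returns a bare JSON boolean, a health check only needs to parse that. A minimal sketch (the function name is mine, not part of kanidm):

```python
import json

def parse_status(body: bytes) -> bool:
    # /status returns a bare JSON boolean: true when the platform
    # is responding to requests, false otherwise.
    value = json.loads(body)
    if not isinstance(value, bool):
        raise ValueError("unexpected /status payload")
    return value
```

A monitoring probe would GET the /status URL and feed the response body to this function, alerting when it returns false or raises.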

Password Quality and Badlisting

Kanidm embeds a set of tools to help your users use and create strong passwords. This is important as not all user types will require MFA for their roles, but compromised accounts still pose a risk. There may also be deployment or other barriers to a site rolling out site-wide MFA.

Quality Checking

Kanidm enforces that all passwords are checked by the library "zxcvbn". This has a large number of checks for password quality. It also provides constructive feedback to users on how to improve their passwords if a password is rejected.

Some things that zxcvbn looks for are use of the account name or email in the password, common passwords, low-entropy passwords, dates, reversed words, and more.

This check cannot be disabled - all passwords in Kanidm must pass it.

Password Badlisting

Badlisting is the process of configuring a list of passwords that users are prevented from using. This is especially useful if a specific business has been notified of a compromised account, allowing you to maintain a customised list of excluded passwords.

The other value to this feature is being able to badlist common passwords that zxcvbn does not detect, or from other large scale password compromises.

By default we ship with a preconfigured badlist that is updated over time as new password breach lists are made available.

Updating your own badlist

You can update your own badlist by using the provided kanidm_badlist_preprocess tool, which helps to automate this process.

Given a list of passwords in a text file, it will generate a modification set which can be applied. The tool also provides the command you need to run to apply this.

kanidm_badlist_preprocess -m -o /tmp/modlist.json <password file> [<password file> <password file> ...]

POSIX Accounts and Groups

Kanidm has features that enable its accounts and groups to be consumed on POSIX-like machines, such as Linux, FreeBSD, or others.

Notes on POSIX Features

Many design decisions have been made in the POSIX features of kanidm that are intended to make distributed systems easier to manage and client systems more secure.

UID and GID numbers

In Kanidm there is no difference between a UID and a GID number. On most UNIX systems a user will create all files with a primary user and group. The primary group is effectively equivalent to the permissions of the user. It is very easy to see scenarios where someone may change the account to have a shared primary group (i.e. allusers), but without changing the umask on all client systems. This can cause users' data to be compromised by any member of the same shared group.

To prevent this, many systems create a "user private group", or UPG. This group has the gidnumber matching the uidnumber of the user, and the user sets its primary group id to the gidnumber of the UPG.

As there is now an equivalence between the UID and GID number of the user and the UPG, there is no benefit to separating these values. As a result kanidm accounts only have a gidnumber, which is also considered to be its uidnumber as well. This has the benefit of preventing the accidental creation of a separate group that has an overlapping gidnumber (the uniqueness attribute of the schema will block the creation).

UPG generation

Due to the requirement that a user have a UPG for security, many systems create these as two independent items. For example, in /etc/passwd and /etc/group (illustrative entries, showing the gidnumber matching the uidnumber):

# passwd
william:x:654401105:654401105::/home/william:/bin/zsh
# group
william:x:654401105:

Other systems like FreeIPA use a plugin that generates a UPG as a database record on creation of the account.

Kanidm does neither of these. As the gidnumber of the user must be unique, and a user implies the UPG must exist, we can generate UPGs on-demand from the account. This has a single side effect - you are unable to add any members to a UPG - given the nature of a user private group, this is the point.

gidnumber generation

In the future, Kanidm plans to have asynchronous replication as a feature between writable database servers. In this case, we need to be able to allocate stable and reliable gidnumbers to accounts on replicas that may not be in continual communication.

To do this, we use the last 32 bits of the account or group's UUID to generate the gidnumber.
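A sketch of this derivation in Python, assuming the gidnumber is simply the low 32 bits of the UUID as described above:

```python
import uuid

def gidnumber_from_uuid(entry_uuid: uuid.UUID) -> int:
    # uuid.int is the full 128-bit value; mask off the last 32 bits.
    return entry_uuid.int & 0xFFFFFFFF

# An account with this (hypothetical) uuid ...
u = uuid.UUID("d9bb0b52-479f-4f22-a97a-6a0f5b7c3c44")
# ... is allocated gidnumber 0x5b7c3c44.
print(gidnumber_from_uuid(u))
```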

A valid concern is the possibility of duplication in the lower 32 bits. Given the birthday problem, if you have 77,000 groups and accounts, you have a 50% chance of duplication. With 50,000 you have a 20% chance, 9,300 you have a 1% chance and with 2900 you have a 0.1% chance.

We advise that if you have a site with >10,000 users you should use an external system to allocate gidnumbers serially or consistently to avoid potential duplication events.

This design decision is made as most small sites will benefit greatly from the autoallocation policy and the simplicity of its design, while larger enterprises will already have IDM or Business process applications for HR/People that are capable of supplying this kind of data in batch jobs.

Enabling Posix Attributes

Enabling Posix Attributes on Accounts

To enable posix account features and ids on an account, you require the permission idm_account_unix_extend_priv. This is provided to idm_admins in the default database.

You can then use the following command to enable posix extensions.

kanidm account posix set --name idm_admin <account_id> [--shell SHELL --gidnumber GID]
kanidm account posix set --name idm_admin demo_user
kanidm account posix set --name idm_admin demo_user --shell /bin/zsh
kanidm account posix set --name idm_admin demo_user --gidnumber 2001

You can view the account's posix token details with:

kanidm account posix show --name anonymous demo_user

Enabling Posix Attributes on Groups

To enable posix group features and ids on a group, you require the permission idm_group_unix_extend_priv. This is provided to idm_admins in the default database.

You can then use the following command to enable posix extensions.

kanidm group posix set --name idm_admin <group_id> [--gidnumber GID]
kanidm group posix set --name idm_admin demo_group
kanidm group posix set --name idm_admin demo_group --gidnumber 2001

You can view the group's posix token details with:

kanidm group posix show --name anonymous demo_group

Posix enabled groups will supply their members as posix members to clients. There is no special or separate type of membership for posix members required.

Troubleshooting Common Issues

Subid conflicts with Podman

Due to the way that podman operates, in some cases using non-root containers with kanidm accounts may fail with an error such as:

ERRO[0000] cannot find UID/GID for user NAME: No subuid ranges found for user "NAME" in /etc/subuid

This is a fault in how podman provides non-root containers when uid/gid values are greater than 65535. In this case you may manually allocate your user's gidnumber to be between 1000 and 65535, which avoids the fault.

SSH Key Distribution

To support SSH authentication securely to a large set of hosts, we support distribution of SSH public keys via the kanidm server.

Configuring accounts

To view the current ssh public keys on accounts, you can use:

kanidm account ssh list_publickeys --name <login user> <account to view>
kanidm account ssh list_publickeys --name idm_admin william

All users by default can self-manage their ssh public keys. To upload a key, use a command like:

kanidm account ssh add_publickey --name william william 'test-key' "`cat ~/.ssh/`"

To remove (revoke) an ssh publickey, delete it by its tag name:

kanidm account ssh delete_publickey --name william william 'test-key'

Security notes

As a security feature, kanidm validates all publickeys to ensure they are valid ssh publickeys. Uploading a private key or other data will be rejected. For example:

kanidm account ssh add_publickey --name william william 'test-key' "invalid"
Enter password:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Http(400, Some(SchemaViolation(InvalidAttributeSyntax)))', src/libcore/

Server Configuration

Public key caching configuration

If you have kanidm_unixd running, you can use it to locally cache ssh public keys. This means you can still ssh into your machines, even if your network is down, you move away from kanidm, or some other interruption occurs.

The kanidm_ssh_authorizedkeys command is part of the kanidm-unix-clients package, so it should be installed on the servers. It communicates with kanidm_unixd, so you should have a configured PAM/nsswitch setup as well.

You can test this is configured correctly by running:

kanidm_ssh_authorizedkeys <account name>

If the account has ssh public keys you should see them listed, one per line.

To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to contain the lines:

PubkeyAuthentication yes
UsePAM yes
AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys %u
AuthorizedKeysCommandUser nobody

Restart sshd, and then attempt to authenticate with the keys.

It's highly recommended you keep your client configuration and sshd_config in a configuration management tool such as salt or ansible.

NOTICE: With a working SSH key setup, you should also consider adding the following sshd_config options as hardening.

PermitRootLogin no
PasswordAuthentication no
PermitEmptyPasswords no
GSSAPIAuthentication no
KerberosAuthentication no

Direct communication configuration

In this mode, the authorised keys commands will contact kanidm directly.

NOTICE: As kanidm is contacted directly there is no ssh public key cache. Any network outage or communication loss may prevent you from accessing your systems. You should only use this version if you have a requirement for it.

The kanidm_ssh_authorizedkeys_direct command is part of the kanidm-clients package, so should be installed on the servers.

To configure the tool, you should edit /etc/kanidm/config, as documented in clients.

You can test this is configured correctly by running:

kanidm_ssh_authorizedkeys_direct -D anonymous <account name>

If the account has ssh public keys you should see them listed, one per line.

To configure servers to accept these keys, you must change their /etc/ssh/sshd_config to contain the lines:

PubkeyAuthentication yes
UsePAM yes
AuthorizedKeysCommand /usr/bin/kanidm_ssh_authorizedkeys_direct -D anonymous %u
AuthorizedKeysCommandUser nobody

Restart sshd, and then attempt to authenticate with the keys.

It's highly recommended you keep your client configuration and sshd_config in a configuration management tool such as salt or ansible.

Recycle Bin

The recycle bin is a storage of deleted entries from the server. This allows recovery from mistakes for a period of time.

WARNING: The recycle bin is a best effort - when recovering in some cases not everything can be "put back" the way it was. Be sure to check your entries are valid once they have been revived.

Where is the Recycle Bin?

The recycle bin is stored as part of your main database - it is included in all backups and restores, just like any other data. It is also replicated between all servers.

How do things get into the Recycle Bin?

Any delete operation of an entry will cause it to be sent to the recycle bin. No configuration or specification is required.

How long do items stay in the Recycle Bin?

Currently they stay up to 1 week before they are removed.

Managing the Recycle Bin

You can display all items in the Recycle Bin with:

kanidm recycle_bin list --name admin

You can show a single item with:

kanidm recycle_bin get --name admin <id>

An entry can be revived with:

kanidm recycle_bin revive --name admin <id>

Edge cases

The recycle bin is a best effort to restore your data - there are some cases where the revived entries may not be the same as they were when they were deleted. This generally revolves around reference types, such as group membership, or when the reference type includes supplemental map data such as the oauth2 scope map type.

An example of this data loss is the following steps:

add user1
add group1
add user1 as member of group1
delete user1
delete group1
revive user1
revive group1

In this series of steps, due to the way that referential integrity is implemented, the membership of user1 in group1 would be lost in this process. To explain why:

add user1
add group1
add user1 as member of group1 // refint between the two established, and memberof added
delete user1 // group1 removes member user1 from refint
delete group1 // user1 now removes memberof group1 from refint
revive user1 // re-add groups based on directmemberof (empty set)
revive group1 // no members

These issues could be looked at again in the future, but for now we think that deletes of groups are rare - we expect the recycle bin to save you in "oops" moments, and in a majority of cases you may delete a group or a user and then restore them. Handling this series of steps would require extra code complexity in how we flag operations. For more, see this issue on github.


Oauth2

Oauth is a web authorisation protocol that allows "single sign on". It's key to note oauth is authorisation, not authentication, as the protocol in its default forms does not provide identity or authentication information, only information that an entity is authorised for the requested resources.

Oauth can tie into extensions allowing an identity provider to reveal information about authorised sessions. This extends oauth from an authorisation only system to a system capable of identity and authorisation. Two primary methods of this exist today: rfc7662 token introspection, and openid connect.

How Does Oauth2 Work?

A user wishes to access a service (resource, resource server). The resource server does not have an active session for the client, so it redirects to the authorisation server (Kanidm) to determine if the client should be allowed to proceed, and has the appropriate permissions (scopes) for the requested resources.

The authorisation server checks the current session of the user and may present a login flow if required. Given the identity of the user known to the authorisation server, and the requested scopes, the authorisation server makes a decision if it allows the authorisation to proceed. The user is then prompted to consent to the authorisation from the authorisation server to the resource server, as some identity information may be revealed by granting this consent.

If successful and consent given, the user is redirected back to the resource server with an authorisation code. The resource server then contacts the authorisation server directly with this code and exchanges it for a valid token that may be provided to the user's browser.

The resource server may then optionally contact the token introspection endpoint of the authorisation server about the provided oauth token, which yields extra metadata about the identity that holds the token from the authorisation. This metadata may include identity information, but also may include extended metadata, sometimes referred to as "claims". Claims are information bound to a token based on properties of the session that may allow the resource server to make extended authorisation decisions without the need to contact the authorisation server to arbitrate.
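As an illustration of the introspection step, the sketch below builds an rfc7662 introspection request. The endpoint URL and credentials are hypothetical; this shows the generic protocol shape, not a kanidm-specific API:

```python
import base64
import urllib.request

def build_introspection_request(url: str, token: str,
                                client_id: str, client_secret: str):
    # rfc7662: the resource server POSTs the token form-encoded,
    # authenticating itself to the authorisation server via HTTP Basic.
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return urllib.request.Request(
        url,
        data=f"token={token}".encode(),
        headers={
            "Authorization": f"Basic {creds}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        method="POST",
    )

req = build_introspection_request(
    "https://idm.example.com/oauth2/token/introspect",  # hypothetical URL
    "<access token>", "nextcloud", "<basic secret>")
```

Sending the request with urllib.request.urlopen would yield a JSON document whose active field, plus any claims, informs the resource server's authorisation decision.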

It's important to note that oauth2 at its core is an authorisation system which has layered identity-providing elements on top.

Resource Server

This is the server that a user wants to access. Common examples could be nextcloud, a wiki or something else. This is the system that "needs protecting" and wants to delegate authorisation decisions to Kanidm.

It's important for you to know how your resource server supports oauth2. For example, does it support rfc7662 token introspection or does it rely on openid connect for identity information? Does the resource server support PKCE or not?

In general Kanidm requires that your resource server supports:

  • HTTP basic authentication to the authorisation server
  • PKCE code verification to prevent certain token attack classes

Kanidm will expose its oauth2 APIs at the following URLs:

  • auth url:
  • token url:
  • token inspect url:

Scope Relationships

For an authorisation to proceed, the resource server will request a list of scopes, which are unique to that resource server. For example, when a user wishes to login to the admin panel of the resource server, it may request the "admin" scope from kanidm for authorisation. But when a user wants to login, it may only request "access" as a scope from kanidm.

As each resource server may have its own scopes and understanding of these, Kanidm isolates scopes to each resource server connected to Kanidm. Kanidm has two methods of granting scopes to accounts (users).

The first are implicit scopes. These are scopes granted to all accounts that Kanidm holds.

The second is scope mappings. These provide a set of scopes if a user is a member of a specific group within Kanidm. This allows you to create a relationship between the scopes of a resource server, and the groups/roles in Kanidm which can be specific to that resource server.

For an authorisation to proceed, all scopes requested must be available in the final scope set that is granted to the account. This final scope set can be built from implicit and mapped scopes.
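The scope resolution described above can be sketched as follows (function and variable names are mine, for illustration only):

```python
def granted_scopes(implicit_scopes, scope_maps, user_groups):
    # Start from the implicit scopes granted to every account, then add
    # the mapped scopes for each group the account is a member of.
    scopes = set(implicit_scopes)
    for group, mapped in scope_maps.items():
        if group in user_groups:
            scopes |= set(mapped)
    return scopes

def authorisation_may_proceed(requested_scopes, granted):
    # Every requested scope must be present in the final granted set.
    return set(requested_scopes) <= granted

granted = granted_scopes(
    implicit_scopes=["login"],
    scope_maps={"nextcloud_admins": ["admin"]},
    user_groups={"nextcloud_admins"},
)
```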


Create the Kanidm Configuration

After you have understood your resource server requirements you first need to configure Kanidm. By default members of "system_admins" or "idm_hp_oauth2_manage_priv" are able to create or manage oauth2 resource server integrations.

You can create a new resource server with:

kanidm system oauth2 create <name> <displayname> <origin>
kanidm system oauth2 create nextcloud "Nextcloud Production" https://nextcloud.example.com

If you wish to create implicit scopes you can set these with:

kanidm system oauth2 set_implicit_scopes <name> [scopes]...
kanidm system oauth2 set_implicit_scopes nextcloud login read_user

You can create a scope map with:

kanidm system oauth2 create_scope_map <name> <kanidm_group_name> [scopes]...
kanidm system oauth2 create_scope_map nextcloud nextcloud_admins admin

Once created you can view the details of the resource server.

kanidm system oauth2 get nextcloud
class: oauth2_resource_server
class: oauth2_resource_server_basic
class: object
displayname: Nextcloud Production
oauth2_rs_basic_secret: <secret>
oauth2_rs_name: nextcloud
oauth2_rs_token_key: hidden

Configure the Resource Server

On your resource server, you should configure the client id as the "oauth2_rs_name" from kanidm, and the password to be the value shown in "oauth2_rs_basic_secret".

You should now be able to test authorisation.

Resetting Resource Server Security Material

In the case of disclosure of the basic secret, or some other security event where you may wish to invalidate a resource server's active sessions/tokens, you can reset the secret material of the server with:

kanidm system oauth2 reset_secrets

Each resource server has unique signing keys and access secrets, so this is limited to each resource server.

PAM and nsswitch

PAM and nsswitch are the core mechanisms used by Linux and BSD clients to resolve identities from an IDM service like Kanidm into accounts that can be used on the machine for various interactive tasks.

The UNIX daemon

Kanidm provides a UNIX daemon that runs on any client that wants to use PAM and nsswitch integration. This is provided as the daemon can cache the accounts of users who have unreliable networks, or who leave the site where Kanidm is hosted. The cache is also able to cache missing-entry responses to reduce network traffic and main server load.

Additionally, the daemon means that the PAM and nsswitch integration libraries can be small, helping to reduce the attack surface of the machine. Similarly, a tasks daemon is available that can create home directories on first login and supports several features related to aliases and links to these home directories.

We recommend you install the client daemon from your system package manager.

# OpenSUSE
zypper in kanidm-unixd-clients
# Fedora
dnf install kanidm-unixd-clients

You can check the daemon is running on your Linux system with

systemctl status kanidm-unixd

You can check the privileged tasks daemon is running with

systemctl status kanidm-unixd-tasks

NOTE The kanidm_unixd_tasks daemon is not required for PAM and nsswitch functionality. If disabled, your system will function as usual. It is, however, recommended due to the features it provides that support Kanidm's capabilities.

Both unixd daemons use the connection configuration from /etc/kanidm/config. This is covered in client_tools.

You can also configure some unixd specific options with the file /etc/kanidm/unixd.

pam_allowed_login_groups = ["posix_group"]
default_shell = "/bin/sh"
home_prefix = "/home/"
home_attr = "uuid"
home_alias = "spn"
uid_attr_map = "spn"
gid_attr_map = "spn"

The pam_allowed_login_groups setting defines a set of posix groups; members of any of these groups will be allowed to login via PAM. All posix users and groups can be resolved by nss regardless of PAM login status. Each entry may be a group name, spn or uuid.

default_shell is the default shell for users with none defined. Defaults to /bin/sh.

home_prefix is the prepended path to where home directories are stored. Must end with a trailing /. Defaults to /home/.

home_attr is the default token attribute used for the home directory path. Valid choices are uuid, name, spn. Defaults to uuid.

home_alias is the default token attribute used for generating symlinks pointing to the users home directory. If set, this will become the value of the home path to nss calls. It is recommended you choose a "human friendly" attribute here. Valid choices are none, uuid, name, spn. Defaults to spn.

NOTICE: All users in Kanidm can change their name (and their spn) at any time. If you change home_attr from uuid you must have a plan on how to manage these directory renames in your system. We recommend that you have a stable id (like the uuid) and symlinks from the name to the uuid folder. Automatic support is provided for this via the unixd tasks daemon, as documented here.

uid_attr_map chooses which attribute is used for domain local users in presentation. Defaults to spn. Users from a trust will always use spn.

gid_attr_map chooses which attribute is used for domain local groups in presentation. Defaults to spn. Groups from a trust will always use spn.
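To illustrate how these options interact, here is a sketch of the home path derivation (illustrative logic and names only, not the actual unixd implementation):

```python
def home_paths(token: dict, home_prefix="/home/",
               home_attr="uuid", home_alias="spn"):
    # The real directory is home_prefix plus the stable home_attr value;
    # if home_alias is set, a symlink named after that attribute's value
    # points to the real directory and is returned to nss callers.
    real = home_prefix + token[home_attr]
    alias = home_prefix + token[home_alias] if home_alias else None
    return real, alias

# Hypothetical posix token for a user.
token = {"uuid": "d9bb0b52-479f-4f22-a97a-6a0f5b7c3c44",
         "name": "william", "spn": "william@idm.example.com"}
real, alias = home_paths(token)
```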

You can then check the communication status of the daemon as any user account.

$ kanidm_unixd_status

If the daemon is working, you should see:

[2020-02-14T05:58:37Z INFO  kanidm_unixd_status] working!

If it is not working, you will see an error message:

[2020-02-14T05:58:10Z ERROR kanidm_unixd_status] Error -> Os { code: 111, kind: ConnectionRefused, message: "Connection refused" }

For more, see troubleshooting.


nsswitch

When the daemon is running you can add the nsswitch libraries to /etc/nsswitch.conf:

passwd: compat kanidm
group: compat kanidm

You can create a user, then enable the posix feature on the user.

You can then test that the posix extended user is able to be resolved with:

$ getent passwd <account name>
$ getent passwd testunix

You can also do the same for groups.

$ getent group <group name>
$ getent group testgroup

HINT Remember to also create a unix password with something like kanidm account posix set_password --name idm_admin demo_user. Otherwise there will be no credential for the account to authenticate with.


PAM

WARNING: Modifications to PAM configuration may leave your system in a state where you are unable to login or authenticate. You should always have a recovery shell open while making changes (i.e. a root shell), or have access to single-user mode at the machine's console.

PAM (Pluggable Authentication Modules) is how a unix-like system allows users to authenticate and be authorised to start interactive sessions. This is configured through a stack of modules that are executed in order to evaluate the request. This is done through a series of steps where each module may request or reuse authentication token information.

Before you start

You should backup your /etc/pam.d directory from its original state as you may change the PAM config in a way that will cause you to be unable to authenticate to your machine.

cp -a /etc/pam.d /root/pam.d.backup


SUSE / OpenSUSE

To configure PAM on SUSE you must modify four files:

  • /etc/pam.d/common-account-pc
  • /etc/pam.d/common-auth-pc
  • /etc/pam.d/common-password-pc
  • /etc/pam.d/common-session-pc
Each of these controls one of the four stages of PAM. The content should look like:

# /etc/pam.d/common-account-pc
account    [default=1 ignore=ignore success=ok]
account    required
account    required ignore_unknown_user

# /etc/pam.d/common-auth-pc
auth        required
auth        [default=1 ignore=ignore success=ok]
auth        sufficient nullok try_first_pass
auth        requisite uid >= 1000 quiet_success
auth        sufficient ignore_unknown_user
auth        required

# /etc/pam.d/common-password-pc
password    requisite
password    [default=1 ignore=ignore success=ok]
password    required use_authtok nullok shadow try_first_pass
password    required

# /etc/pam.d/common-session-pc
session optional
session required
session optional try_first_pass
session optional
session optional
session optional

WARNING: Ensure that pam_mkhomedir or pam_oddjobd are not present in your pam configuration, as they interfere with the correct operation of the kanidm tasks daemon.

Fedora 33

WARNING: Kanidm currently has no support for SELinux policy - this may mean you need to run the daemon with permissive mode for the unconfined_service_t daemon type. To do this run: semanage permissive -a unconfined_service_t. To undo this run semanage permissive -d unconfined_service_t.

You may also need to run audit2allow for sshd and other types to be able to access the unix daemon sockets.

These files are managed by authselect as symlinks. You will need to remove the symlinks first, then edit the content.

# /etc/pam.d/password-auth
auth        required                             pam_env.so
auth        required                             pam_faildelay.so delay=2000000
auth        [default=1 ignore=ignore success=ok] pam_usertype.so isregular
auth        [default=1 ignore=ignore success=ok] pam_localuser.so
auth        sufficient                           pam_unix.so nullok try_first_pass
auth        [default=1 ignore=ignore success=ok] pam_usertype.so isregular
auth        sufficient                           pam_kanidm.so debug ignore_unknown_user
auth        required                             pam_deny.so

account     sufficient                           pam_unix.so
account     sufficient                           pam_localuser.so
account     sufficient                           pam_usertype.so issystem
account     sufficient                           pam_kanidm.so debug ignore_unknown_user
account     required                             pam_deny.so

password    requisite                            pam_pwquality.so try_first_pass local_users_only
password    sufficient                           pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    sufficient                           pam_kanidm.so debug
password    required                             pam_deny.so

session     optional                             pam_keyinit.so revoke
session     required                             pam_limits.so
-session    optional                             pam_systemd.so
session     [success=1 default=ignore]           pam_succeed_if.so service in crond quiet use_uid
session     required                             pam_unix.so
session     optional                             pam_kanidm.so debug
# /etc/pam.d/system-auth
auth        required                             pam_env.so
auth        required                             pam_faildelay.so delay=2000000
auth        sufficient                           pam_fprintd.so
auth        [default=1 ignore=ignore success=ok] pam_usertype.so isregular
auth        [default=1 ignore=ignore success=ok] pam_localuser.so
auth        sufficient                           pam_unix.so nullok try_first_pass
auth        [default=1 ignore=ignore success=ok] pam_usertype.so isregular
auth        sufficient                           pam_kanidm.so debug ignore_unknown_user
auth        required                             pam_deny.so

account     sufficient                           pam_unix.so
account     sufficient                           pam_localuser.so
account     sufficient                           pam_usertype.so issystem
account     sufficient                           pam_kanidm.so debug ignore_unknown_user
account     required                             pam_deny.so

password    requisite                            pam_pwquality.so try_first_pass local_users_only
password    sufficient                           pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    sufficient                           pam_kanidm.so debug
password    required                             pam_deny.so

session     optional                             pam_keyinit.so revoke
session     required                             pam_limits.so
-session    optional                             pam_systemd.so
session     [success=1 default=ignore]           pam_succeed_if.so service in crond quiet use_uid
session     required                             pam_unix.so
session     optional                             pam_kanidm.so debug


Check POSIX-status of group and config

If authentication is failing via PAM, make sure that a list of groups is configured in /etc/kanidm/unixd:

pam_allowed_login_groups = ["example_group"]
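As a quick mental model, the gate this setting applies can be sketched as follows. This is purely illustrative Python, not the real kanidm_unixd code; the names are invented:

```python
# Illustrative sketch: unixd only permits a login when the account shares
# at least one group with the configured pam_allowed_login_groups list.
ALLOWED_LOGIN_GROUPS = ["example_group"]  # mirrors /etc/kanidm/unixd

def login_permitted(account_groups):
    """Return True if any of the account's groups is in the allow-list."""
    return any(g in ALLOWED_LOGIN_GROUPS for g in account_groups)

print(login_permitted(["example_group", "staff"]))  # True
print(login_permitted(["staff"]))                   # False
```

If the list is empty or the account is in none of the listed groups, PAM authentication is denied.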

Check the status of the group with kanidm group posix show example_group. If you get something similar to the below:

> kanidm group posix show example_group
Using cached token for name idm_admin
Error -> Http(500, Some(InvalidAccountState("Missing class: account && posixaccount OR group && posixgroup")), "b71f137e-39f3-4368-9e58-21d26671ae24")

POSIX-enable the group with kanidm group posix set example_group. You should get a result similar to this when you search for your group name:

> kanidm group posix show example_group
[ spn:, gidnumber: 3443347205, name: example_group, uuid: b71f137e-39f3-4368-9e58-21d26671ae24 ]

Also, ensure the target user is in the group by running:

>  kanidm group list_members example_group

Increase logging

For the unixd daemon, you can increase the logging with:

systemctl edit kanidm-unixd.service

And add the lines:

[Service]
Environment="RUST_LOG=kanidm=debug"
Then restart the kanidm-unixd.service.

The same pattern is true for the kanidm-unixd-tasks.service daemon.

To debug the pam module interactions add debug to the module arguments such as:

auth sufficient pam_kanidm.so debug

Check the socket permissions

Check that /var/run/kanidm-unixd/sock has mode 777, and that non-root users can see it with ls or other tools.

Ensure that /var/run/kanidm-unixd/task_sock has mode 700, and that it is owned by the kanidm unixd process user.

Check you can access the kanidm server

You can check this with the client tools:

kanidm self whoami --name anonymous

Ensure the libraries are correct

You should have:

/usr/lib64/libnss_kanidm.so.2
/usr/lib64/security/pam_kanidm.so

The exact path may change depending on your distribution; pam_kanidm.so should be co-located with the other PAM modules, so you can locate it with:

find /usr/ -name 'pam_kanidm.so'

For example, on a Debian machine, it's located in /usr/lib/x86_64-linux-gnu/security/.

Increase connection timeout

In some high-latency environments, you may need to increase the connection timeout. We set this low to improve responsiveness on LANs, but over the internet it may need to be increased. Increasing conn_timeout lets you operate on higher-latency links, though some operations may take longer to complete.

By increasing cache_timeout, you will need to refresh less often, but it may mean that on an account lockout or group change you have to wait up to cache_timeout to see the effect (this has security implications).

# /etc/kanidm/unixd
# Seconds
conn_timeout = 8
# Cache timeout
cache_timeout = 60

Invalidate the cache

You can invalidate the kanidm_unixd cache with:

$ kanidm_cache_invalidate

You can clear (wipe) the cache with:

$ kanidm_cache_clear

There is an important distinction between these two - invalidated cache items may still be yielded to a client request if the communication to the main Kanidm server is not possible. For example, you may have your laptop in a park without wifi.

Clearing the cache, however, completely wipes all local data about all accounts and groups. If you are relying on this cached (but invalid) data, you may lose access to your accounts until the communication issues have been resolved.
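The distinction can be sketched in a few lines. This is an illustrative model only, not the real kanidm_unixd implementation:

```python
# Toy cache model: "invalidate" keeps entries but marks them stale, so they
# can still serve offline lookups; "clear" wipes them entirely.
cache = {"william": {"valid": True, "gidnumber": 1000}}

def lookup(name, server_online):
    entry = cache.get(name)
    if entry is None:
        return None                  # cleared: nothing to fall back on
    if entry["valid"] or not server_online:
        return entry                 # stale entries still serve when offline
    return None                      # online and stale: refresh from server

def invalidate():
    for entry in cache.values():
        entry["valid"] = False       # keep the data, mark it stale

def clear():
    cache.clear()                    # wipe all local account/group data

invalidate()
assert lookup("william", server_online=False) is not None  # offline fallback
clear()
assert lookup("william", server_online=False) is None      # data is gone
```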


RADIUS is a network protocol commonly used to allow wifi devices or VPNs to authenticate users at a network boundary. While it should not be the sole point of trust/authentication for an identity, it is still an important control for raising the barrier to attackers accessing network resources.

Kanidm has a philosophy that each account can have multiple credentials which are related to their devices and limited to specific resources. RADIUS is no exception and has a separate credential for each account to use for RADIUS access.


It's worth noting some disclaimers about Kanidm's RADIUS integration.

One Credential - One Account

Kanidm normally attempts to have credentials for each device and application rather than the legacy model of one to one.

The RADIUS protocol can only attest a single credential in an authentication attempt, which limits us to storing a single RADIUS credential per account. Despite this limitation, it still greatly improves the situation by isolating the RADIUS credential from the primary or application credentials of the account. This addresses many common security concerns around credential loss or disclosure, and prevents rogue devices from locking out accounts as they attempt to authenticate to wifi with expired credentials.

Cleartext Credential Storage

RADIUS offers many different types of tunnels and authentication mechanisms. However, most client devices "out of the box" only attempt a single type when you select a WPA2-Enterprise network: MSCHAPv2 with PEAP. This is a challenge-response protocol that requires cleartext or NTLM credentials.

As MSCHAPv2 with PEAP is the only practical, universal RADIUS type supported on all devices with "minimal" configuration, we consider it imperative that it MUST be supported as the default. Esoteric RADIUS types can be used as well, but this is up to administrators to test and configure.

Due to this requirement, we must store the RADIUS material as cleartext or NTLM hashes. NTLM should not be mistaken for a secure format: it is based on MD4 and provides only an illusion of security.

This means Kanidm stores RADIUS credentials in the database as cleartext.

We believe this is a reasonable decision and is a low risk to security as:

  • The access controls around RADIUS secrets by default are "strong", limited to only self-account read and RADIUS-server read.
  • As RADIUS credentials are separate from the primary account credentials and have no other rights, their disclosure is not going to lead to a full account compromise.
  • Having the credentials in cleartext allows a better user experience as clients can view the credentials at any time to enrol further devices.

Account Credential Configuration

For an account to use RADIUS they must first generate a RADIUS secret unique to that account. By default, all accounts can self-create this secret.

kanidm account radius generate_secret --name william william
kanidm account radius show_secret --name william william

Account group configuration

Kanidm enforces that accounts which can authenticate to RADIUS must be a member of an allowed group. This allows you to define which users or groups may use wifi or VPN infrastructure and gives a path for "revoking" access to the resources through group management. The key point of this is that service accounts should not be part of this group.

kanidm group create --name idm_admin radius_access_allowed
kanidm group add_members --name idm_admin radius_access_allowed william

RADIUS Server Service Account

To read these secrets, the RADIUS server requires an account with the correct privileges. This can be created and assigned through the group "idm_radius_servers" which is provided by default.

kanidm account create --name admin radius_service_account "Radius Service Account"
kanidm group add_members --name admin idm_radius_servers radius_service_account
kanidm account credential reset_credential --name admin radius_service_account

Deploying a RADIUS Container

We provide a RADIUS container that has all the needed integrations. This container requires some cryptographic material, laid out in a volume like so:

data/ca.pem             # This is the kanidm ca.pem
data/config.ini         # This is the kanidm-radius configuration.
data/certs/dh           # openssl dhparam -out ./dh 2048
data/certs/ca.pem       # These are the radius ca/cert/key
data/certs/cert.pem     #
data/certs/key.pem      #

The config.ini has the following template:

url =                   # URL to the kanidm server
strict = false          # Strict CA verification
ca = /data/ca.pem       # Path to the kanidm ca
user =                  # Username of the RADIUS service account
secret =                # Generated secret for the service account

; default VLANs for groups that don't specify one.
vlan = 1

; [group.test]          # group.<name> will have these options applied
; vlan =

ca =                    # Path to the radius server's CA
key =                   # Path to the radius server's key
cert =                  # Path to the radius server's cert
dh =                    # Path to the radius server's dh params
required_group =        # Name of a kanidm group which you must be
                        # a member of to use radius.
cache_path =            # A path to an area where cached user records can be stored.
                        # If in doubt, use /dev/shm/kanidmradiusd

; [client.localhost]    # client.<nas name> configures wifi/vpn consumers
; ipaddr =              # ipv4 or ipv6 address of the NAS
; secret =              # Shared secret

A fully configured example is:

; be sure to check the listening port is correct, it's the docker internal port
; not the external one if these containers are on the same host.
url = https://<kanidmd container name or ip>:8443
strict = true           # Adjust this if you have CA validation issues
ca = /data/ca.crt
user = radius_service_account
secret =                # The generated password from above

; default vlans for groups that don't specify one.
vlan = 1

[group.radius_access_allowed]
vlan = 10

ca = /data/certs/ca.pem
key =  /data/certs/key.pem
cert = /data/certs/cert.pem
dh = /data/certs/dh
required_group = radius_access_allowed
cache_path = /dev/shm/kanidmradiusd

[client.localhost]
ipaddr = 127.0.0.1
secret = testing123

[client.docker]
ipaddr = 172.17.0.0/16
secret = testing123

You can then run the container with:

docker run --name radiusd -v ...:/data kanidm/radius:latest

Authentication can be tested through the client.localhost NAS configuration with:

docker exec -i -t radiusd radtest <username> badpassword 10 testing123
docker exec -i -t radiusd radtest <username> <radius show_secret value here> 10 testing123

Finally, to expose this to a wifi infrastructure, add your NAS in config.ini:

[client.<NAS name>]
ipaddr = <some ipaddr>
secret = <random value>

And re-create/run your docker instance with -p 1812:1812 -p 1812:1812/udp ...

If you have any issues, check the logs from the RADIUS output; they tend to indicate the cause of the problem. To increase logging, you can re-run your environment with debug enabled:

docker rm radiusd
docker run --name radiusd -e DEBUG=True -i -t -v ...:/data kanidm/radius:latest

Note that the radius container is configured to provide Tunnel-Private-Group-ID, so if you wish to use wifi-assigned VLANs on your infrastructure, you can assign them by group in config.ini as shown in the examples above.
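The VLAN selection described here can be sketched as follows. This is illustrative Python under assumed behaviour, not the container's actual code: a group-specific vlan wins, otherwise the default applies.

```python
# Illustrative sketch of per-group VLAN assignment with a default fallback.
DEFAULT_VLAN = 1                               # top-level "vlan =" setting
GROUP_VLANS = {"radius_access_allowed": 10}    # from [group.<name>] sections

def select_vlan(member_of):
    """Pick the VLAN for the first group with a configured override."""
    for group in member_of:
        if group in GROUP_VLANS:
            return GROUP_VLANS[group]
    return DEFAULT_VLAN

print(select_vlan(["radius_access_allowed"]))  # 10
print(select_vlan(["staff"]))                  # 1
```

The selected value is what would be returned to the NAS as Tunnel-Private-Group-ID.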


While many applications can support systems like SAML or OAuth, many do not. LDAP has been the "lingua franca" of authentication for many years, with almost every application in the world being able to search and bind to LDAP. As there are still many of these in the world, Kanidm can host a read-only LDAP interface.

WARNING: The LDAP server in Kanidm is not RFC compliant. This is intentional, as Kanidm aims to cover the common use case (simple bind and search).

What is LDAP

LDAP is a protocol to read data from a directory of information. It is not a server, but a way to communicate with a server. There are many famous LDAP implementations, such as Active Directory, 389 Directory Server, DSEE, FreeIPA and many others. Because it is a standard, applications can use an LDAP client library to authenticate users to LDAP, giving "one account" for many applications - an IDM just like Kanidm!

Data Mapping

Kanidm cannot be mapped 100% to LDAP's objects. This is because LDAP types are simple key-value pairs on objects, where values are UTF-8 strings (or subsets thereof) based on validation (matching) rules. Kanidm internally implements complex data types such as tagging on SSH keys, or multi-value credentials. These cannot be represented in LDAP.

As well, many of the structures in Kanidm don't correlate closely to LDAP. For example, Kanidm only has a gidnumber, where LDAP schemas define both uidnumber and gidnumber.

Entries in the database also have a specific name in LDAP, related to their path in the directory tree. Kanidm is a flat model, so we have to emulate some tree-like elements, and ignore others.

For this reason, when you search the LDAP interface, Kanidm will make some mapping decisions.

  • The domain_info object becomes the suffix root.
  • All other entries are direct subordinates of the domain_info for DN purposes.
  • DNs are generated from the entries' naming attributes.
  • Bind DNs can be remapped and rewritten, and may not even be a DN during bind.
  • The Kanidm domain name is used to generate the basedn.
  • The '*' and '+' operators can not be used in conjunction with attribute lists in searches.

These decisions were made to make the path as simple and effective as possible, relying more on the Kanidm query and filter system than attempting to generate a tree-like representation of data. As almost all clients can use filters for entry selection we don't believe this is a limitation for consuming applications.
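As a rough sketch of the basedn and DN generation rules above (illustrative helper functions, not Kanidm's actual code):

```python
# Illustrative: derive the basedn from the Kanidm domain name, and form
# entry DNs as direct subordinates of that suffix.
def basedn(domain):
    """dc= components are generated from the dotted domain name."""
    return ",".join("dc=" + part for part in domain.split("."))

def entry_dn(naming_attr, value, domain):
    """Every non-root entry hangs directly off the suffix."""
    return "{}={},{}".format(naming_attr, value, basedn(domain))

print(basedn("example.com"))                     # dc=example,dc=com
print(entry_dn("name", "test1", "example.com"))  # name=test1,dc=example,dc=com
```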



TLS

StartTLS is not supported due to security risks. LDAPS is the only secure method of communicating to any LDAP server. Kanidm, if configured with certificates, will use them for LDAPS (and will not listen on a plaintext LDAP port). If no certificates exist, Kanidm will listen on a plaintext LDAP port, and you MUST terminate TLS in front of the Kanidm system to secure data and authentication.

Access Controls

LDAP only supports password authentication. As LDAP is used heavily in posix environments the LDAP bind for any DN will use its configured posix password.

As the posix password is not equivalent in strength to the primary credentials of Kanidm (which may be MFA), the LDAP bind does not grant rights to elevated read permissions. All binds have the permissions of "Anonymous" (even if the anonymous account is locked).

Server Configuration

To configure Kanidm to provide LDAP, add the ldapbindaddress argument to the server.toml configuration:

ldapbindaddress = "127.0.0.1:3636"

You should configure TLS certificates and keys as usual - LDAP will re-use the webserver TLS material.

Showing LDAP entries and attribute maps

By default, Kanidm is limited in which attributes are generated or remapped into LDAP entries. However, the server internally contains a map of extended attribute mappings for application-specific requests that must be satisfied.

An example is that some applications expect and require a 'CN' value, even though Kanidm does not provide it. If the application is unable to be configured to accept "name" it may be necessary to use Kanidm's mapping feature. Today these are compiled into the server so you may need to open an issue with your requirements.

To show what attribute maps exist for an entry, you can use the attribute search term '+'.

# To show Kanidm attributes
ldapsearch ... -x '(name=admin)' '*'
# To show all attribute maps
ldapsearch ... -x '(name=admin)' '+'

Attributes that are in the map can be requested explicitly, and this can be combined with requesting Kanidm native attributes.

ldapsearch ... -x '(name=admin)' cn objectClass displayname memberof


Given a default install with the domain "example.com", the configured LDAP basedn will be "dc=example,dc=com". This can be queried with:

cargo run -- server -D kanidm.db -C ca.pem -c cert.pem -k key.pem -b -l
> LDAPTLS_CACERT=ca.pem ldapsearch -H ldaps://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)'

objectclass: account
objectclass: memberof
objectclass: object
objectclass: person
displayname: Test User
name: test1
entryuuid: 22a65b6c-80c8-4e1a-9b76-3f3afdff8400

It is recommended that client applications filter accounts that can log in with '(class=account)' and groups with '(class=group)'. If possible, group membership is defined in rfc2307bis or Active Directory style. This means groups are determined from the "memberof" attribute, which contains a DN pointing to a group.

LDAP binds can use any unique identifier of the account. The following are all valid bind DNs for the object listed above (provided it is a posix account).

ldapwhoami ... -x -D 'name=test1'
ldapwhoami ... -x -D 'spn=test1@example.com'
ldapwhoami ... -x -D 'test1@example.com'
ldapwhoami ... -x -D 'test1'
ldapwhoami ... -x -D '22a65b6c-80c8-4e1a-9b76-3f3afdff8400'
ldapwhoami ... -x -D 'spn=test1@example.com,dc=example,dc=com'
ldapwhoami ... -x -D 'name=test1,dc=example,dc=com'
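The remapping idea behind these bind forms can be sketched like so. This is a simplified illustration only; Kanidm's real resolution logic is richer than this:

```python
# Illustrative: a bind "DN" may be a name, spn, uuid, or a full DN.
# Strip any trailing basedn and any leading naming attribute, leaving
# the unique identifier to resolve the account with.
def normalise_bind_id(dn, base="dc=example,dc=com"):
    dn = dn.strip()
    if dn.lower().endswith("," + base):   # strip the trailing basedn
        dn = dn[: -(len(base) + 1)]
    if "=" in dn:                          # strip a naming attribute prefix
        dn = dn.split("=", 1)[1]
    return dn                              # a name, spn or uuid remains

for bind in ["name=test1", "test1", "name=test1,dc=example,dc=com"]:
    assert normalise_bind_id(bind) == "test1"
```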

Most LDAP clients are very picky about TLS, and can be very hard to debug because they display unhelpful errors. For example, these commands:

ldapsearch -H ldaps://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)'
ldapsearch -H ldap://127.0.0.1:3636 -b 'dc=example,dc=com' -x '(name=test1)'
ldapsearch -H ldap://127.0.0.1:3389 -b 'dc=example,dc=com' -x '(name=test1)'

All give the same error:

ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

This is despite the fact:

  • The first command is a certificate validation error
  • The second is a missing LDAPS on a TLS port
  • The third is an incorrect port

To diagnose errors like this, you may need to add "-d 1" to your LDAP commands or client.

Why TLS?

You may have noticed that Kanidm requires you to configure TLS in your container - or that you provide something with TLS in front like haproxy.

This is due to a single setting on the server - secure_cookies.

What are Secure Cookies?

secure_cookies causes the Secure flag to be set on cookies, which "asks" a client to transmit them back to the origin site if, and only if, https is present in the URL.

CA verification is not checked - you can use invalid, out of date certificates, or even certificates where the subjectAltName does not match, but the client must see https:// as the destination else it will not send the cookies.

How does that affect Kanidm?

Kanidm's authentication system is a stepped challenge response design, where you initially request an "intent" to authenticate. Once you establish this intent, the server sets up a session-id into a cookie, and informs the client of what authentication methods can proceed.

When you then go to continue the authentication, if you do NOT have an https url, the cookie with the session-id is not transmitted. The server detects this as an invalid state in the authentication design and immediately disconnects you from continuing the authentication, as you may be using an insecure channel.

Simply put, we use settings like secure_cookies to add constraints to the server so that you must adhere to best practices - such as having TLS present on your communication channels.
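A toy model of this interaction, purely illustrative (the function names are invented, and Kanidm's actual flow has more steps):

```python
# Illustrative: a Secure cookie is withheld by the client on plain http,
# and the server treats a continuation without it as an invalid state.
def cookies_sent(url, cookies):
    """A browser only transmits Secure cookies to https:// origins."""
    return cookies if url.startswith("https://") else {}

def continue_auth(received):
    """The server needs the session-id to continue the stepped challenge."""
    if "session-id" not in received:
        return "denied: invalid session state"
    return "proceed with credential step"

jar = {"session-id": "abc123"}  # set during the auth "intent" step
print(continue_auth(cookies_sent("https://idm.example.com/auth", jar)))
print(continue_auth(cookies_sent("http://idm.example.com/auth", jar)))
```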

Getting Started (for Developers)


See the designs folder, and compile the private documentation locally:

cargo doc --document-private-items --open --no-deps

Minimum Supported Rust Version

The project is expected to work on an MSRV of 1.47.0.



MacOS

You will need rustup to install a rust toolchain.

If you plan to work on the web-ui, you may also need npm for setting up some parts.

brew install npm


SUSE

You will need rustup to install a rust toolchain.

You will also need some system libraries to build this:

libudev-devel sqlite3-devel libopenssl-devel npm-default

Get involved

To get started, you'll need to fork or branch, and we'll merge based on PRs.

If you are a contributor to the project, simply clone:

git clone https://github.com/kanidm/kanidm.git

If you are forking, then Fork in github and clone with:

git clone https://github.com/kanidm/kanidm.git
cd kanidm
git remote add myfork git@github.com:<YOUR USERNAME>/kanidm.git

Select an issue (always feel free to reach out to us for advice!), and create a branch to start working:

git branch <feature-branch-name>
git checkout <feature-branch-name>
cargo test

When you are ready for review (even if the feature isn't complete and you just want some advice):

cargo test
git commit -m 'Commit message' ...
git push <myfork/origin> <feature-branch-name>

If you get advice or make changes, just keep committing to the branch and pushing to your fork. When we are happy with the code, we'll merge it in github, meaning you can then clean up your branch.

git checkout master
git pull
git branch -D <feature-branch-name>


If you are asked to rebase your change, follow these steps:

git checkout master
git pull
git checkout <feature-branch-name>
git rebase master

Then be sure to fix any merge issues or other comments as they arise. If you have issues, you can always stop and reset with:

git rebase --abort

Development Server Quickstart for Interactive Testing

After getting the code, you will need a rust environment. Please investigate rustup for your platform to establish this.

Once you have the source code, you need certificates to use with the server. I recommend using Let's Encrypt, but if this is not possible, please use our insecure certificate generation tool. Without certificates, authentication will fail.

mkdir insecure
cd insecure
../insecure_generate_tls.sh

You can now build and run the server with the commands below. It will use a database in /tmp/kanidm.db

cd kanidmd
cargo run -- recover_account -c ./server.toml -n admin
cargo run -- server -c ./server.toml

In a new terminal, you can now build and run the client tools with:

cd kanidm_tools
cargo run -- --help
cargo run -- login -H https://localhost:8443 -D anonymous -C ../insecure/ca.pem
cargo run -- self whoami -H https://localhost:8443 -D anonymous -C ../insecure/ca.pem
cargo run -- login -H https://localhost:8443 -D admin -C ../insecure/ca.pem
cargo run -- self whoami -H https://localhost:8443 -D admin -C ../insecure/ca.pem

Building the Web UI

NOTE: There is a pre-packaged version of the Web UI at /kanidmd_web_ui/pkg/, which can be used directly. This means you don't need to build the Web UI yourself.

The web UI uses rust wasm rather than javascript. To build this you need to set up the environment.

cargo install wasm-pack
npm install --global rollup

Then you are able to build the UI.

cd kanidmd_web_ui/
./build_wasm_dev.sh

The "developer" profile for kanidmd will automatically use the pkg output in this folder.

Setting different developer profiles while building is done by setting the environment variable KANIDM_BUILD_PROFILE to the bare filename of one of the TOML files in /profiles. For example: KANIDM_BUILD_PROFILE=release_suse_generic cargo build --release --bin kanidmd