2021-09-14 16:37:35

by Roberto Sassu

Subject: [PATCH v3 00/13] integrity: Introduce DIGLIM

Status
======

This version of the patch set implements the suggestions received for
version 2. Apart from one patch added for the IMA API and a few fixes,
there are no substantial changes. It has been tested on: x86_64, UML
(x86_64), s390x (big endian).

The long term goal is to boot a system with appraisal enabled and with
DIGLIM as repository for reference values, taken from the RPM database.

Changes required:
- new execution policies in IMA
(https://lore.kernel.org/linux-integrity/[email protected]/)
- support for the euid policy keyword for critical data
(https://lore.kernel.org/linux-integrity/[email protected]/)
- basic DIGLIM
(this patch set)
- additional DIGLIM features (loader, LSM, user space utilities)
- support for DIGLIM in IMA
- support for PGP keys and signatures
(from David Howells)
- support for PGP appended signatures in IMA


Introduction
============

Digest Lists Integrity Module (DIGLIM) is a component of the integrity
subsystem in the kernel, primarily aiming to aid Integrity Measurement
Architecture (IMA) in the process of checking the integrity of file
content and metadata. It accomplishes this task by storing reference
values coming from software vendors and by reporting whether or not the
digest of file content or metadata calculated by IMA (or EVM) is found
among those values. In this way, IMA can decide, depending on the result
of a query, if a measurement should be taken or access to the file
should be granted. The Security Assumptions section explains in more
detail why this component has been placed in the kernel.

The main benefits of using IMA in conjunction with DIGLIM are the
ability to implement advanced remote attestation schemes based on the
usage of a TPM key for establishing a TLS secure channel[1][2], and to
reduce the burden on Linux distribution vendors to extend secure boot at
OS level to applications.

DIGLIM does not have the complexity of feature-rich databases. In fact,
its main functionality comes from the hash table primitives already in
the kernel. It does not have an ad-hoc storage module; it just indexes
data in a fixed format (digest lists, a set of concatenated digests
preceded by a header), copied to kernel memory as is. Lastly, it does
not support database-oriented languages such as SQL, but only accepts a
digest and its algorithm as a query.

The only digest list format supported by DIGLIM is called compact.
However, Linux distribution vendors don't have to generate new digest
lists in this format for the packages they release, as already available
information, such as RPM headers and DEB package metadata, can be used
as a source for reference values (they include file digests), with a
user space parser taking care of the conversion to the compact format.

Although one might expect that storing file or metadata digests for a
Linux distribution would significantly increase memory usage, this does
not seem to be the case. As a preview of the evaluation done in the
Preliminary Performance Evaluation section, protecting binaries and
shared libraries of a minimal Fedora 33 installation requires 208K of
memory for the digest lists plus 556K for indexing.

In exchange for a slightly increased memory usage, DIGLIM improves the
performance of the integrity subsystem. In the considered scenario, IMA
measurement and appraisal of 5986 files with digest lists require,
respectively, less than a quarter and less than half the time of the
current solution.

DIGLIM also keeps track of whether digest lists have been processed in
some way (e.g. measured or appraised by IMA). This is important, for
example, for remote attestation, so that remote verifiers understand
what has been uploaded to the kernel.

Operations in DIGLIM are atomic: if an error occurs during the addition
of a digest list, DIGLIM rolls back the entire insert operation;
deletions, instead, always succeed. This capability has been tested with
an ad-hoc fault injection mechanism capable of simulating failures
during the operations.

Finally, DIGLIM exposes to user space, through securityfs, the digest
lists currently loaded, the number of digests added, a query interface
and an interface to set digest list labels.


Binary Integrity

Integrity is a fundamental security property in information systems.
Integrity can be described as the condition a generic component is in
just after it has been released by the entity that created it.

One way to check whether a component is in this condition (called binary
integrity) is to calculate its digest and to compare it with a reference
value (i.e. the digest calculated in controlled conditions, when the
component is released).

IMA, part of the integrity subsystem, can perform such an evaluation
and execute different actions:

- store the digest in an integrity-protected measurement list, so that
it can be sent to a remote verifier for analysis;
- compare the calculated digest with a reference value (usually
protected with a signature) and deny operations if the file is found
corrupted;
- store the digest in the system log.


Benefits

DIGLIM further enhances the capabilities offered by IMA-based solutions
and, at the same time, makes them more practical to adopt by reusing
existing sources as reference values for integrity decisions.

Possible sources for digest lists are:

- RPM headers;
- Debian repository metadata.

Benefits for IMA Measurement

One of the issues that arises when files are measured by the OS is
that, due to parallel execution, the order in which file accesses happen
cannot be predicted. Since the TPM Platform Configuration Register (PCR)
extend operation, executed after each file measurement,
cryptographically binds the current measurement to the previous ones,
the PCR value at the end of a workload cannot be predicted either.

Thus, even if the usage of a TPM key, bound to a PCR value, should be
allowed when only good files were accessed, the TPM could unexpectedly
deny an operation on that key if file accesses did not happen as stated
by the key policy (which allows only one of the possible sequences).

DIGLIM solves this issue by making the PCR value stable over time and
independent of file accesses. The following figure depicts the current
and the new approaches:

IMA measurement list (current)

entry# 1st boot 2nd boot 3rd boot
+----+---------------+ +----+---------------+ +----+---------------+
1: | 10 | file1 measur. | | 10 | file3 measur. | | 10 | file2 measur. |
+----+---------------+ +----+---------------+ +----+---------------+
2: | 10 | file2 measur. | | 10 | file2 measur. | | 10 | file3 measur. |
+----+---------------+ +----+---------------+ +----+---------------+
3: | 10 | file3 measur. | | 10 | file1 measur. | | 10 | file4 measur. |
+----+---------------+ +----+---------------+ +----+---------------+

PCR: Extend != Extend != Extend
file1, file2, file3 file3, file2, file1 file2, file3, file4


PCR Extend definition:

PCR(new value) = Hash(Hash(meas. entry), PCR(previous value))

A new entry in the measurement list is created by IMA for each file
access. Assuming that file1, file2 and file3 are files provided by the
software vendor and file4 is an unknown file, the first two PCR values
above represent a good system state, the third a bad system state. The
PCR values are the result of the PCR extend operation performed for each
measurement entry with the digest of the measurement entry as an input.

IMA measurement list (with DIGLIM)

dlist
+--------------+
| header |
+--------------+
| file1 digest |
| file2 digest |
| file3 digest |
+--------------+

dlist is a digest list containing the digest of file1, file2 and file3.
In the intended scenario, it is generated by a software vendor at the
end of the building process, and retrieved by the administrator of the
system where the digest list is loaded.

entry# 1st boot 2nd boot 3rd boot
+----+---------------+ +----+---------------+ +----+---------------+
0: | 11 | dlist measur. | | 11 | dlist measur. | | 11 | dlist measur. |
+----+---------------+ +----+---------------+ +----+---------------+
1: < file1 measur. skip > < file3 measur. skip > < file2 measur. skip >

2: < file2 measur. skip > < file2 measur. skip > < file3 measur. skip >
+----+---------------+
3: < file3 measur. skip > < file1 measur. skip > | 11 | file4 measur. |
+----+---------------+

PCR: Extend = Extend != Extend
dlist dlist dlist, file4

The first entry in the measurement list contains the digest of the
digest list uploaded to the kernel at kernel initialization time.

When a file is accessed, IMA queries DIGLIM with the calculated file
digest and, if it is found, IMA skips the measurement.

Thus, the only information sent to remote verifiers is: the list of
files that could possibly be accessed (from the digest list), though not
whether or when they were accessed; and the measurements of unknown
files.

Despite providing less information, this solution has the advantage
that the good system state (i.e. when only file1, file2 and file3 are
accessed) can now be represented with a deterministic PCR value (the PCR
is extended only with the measurement of the digest list). Also, the bad
system state can still be distinguished from the good state (the PCR is
also extended with the measurement of file4).

If a TPM key is bound to the good PCR value, the TPM would allow the key
to be used if file1, file2 or file3 are accessed, regardless of the
sequence in which they are accessed (the PCR value does not change), and
would revoke the permission when the unknown file4 is accessed (the PCR
value changes). If a system is able to establish a TLS connection with a
peer, this implicitly means that the system was in a good state (i.e.
file4 was not accessed, otherwise the TPM would have denied the usage of
the TPM key due to the key policy).

Benefits for IMA Appraisal

Extending secure boot to applications means being able to verify the
provenance of the files accessed. IMA does this by verifying file
signatures with a key that it trusts, which requires Linux distribution
vendors to additionally include in the package header a signature for
each file that must be verified (there is a dedicated
RPMTAG_FILESIGNATURES section in the RPM header).

The proposed approach is instead to verify data provenance from
metadata (file digests) already available in existing packages. IMA
would verify the signature of the package metadata and search for file
digests extracted from the package metadata and added to the hash table
in the kernel.

For RPMs, file digests can be found in the RPMTAG_FILEDIGESTS section of
RPMTAG_IMMUTABLE, whose signature is in RPMTAG_RSAHEADER. For DEBs, file
digests (unsafe to use due to a weak digest algorithm) can be found in
the md5sum file, which can be indirectly verified from Release.gpg.

The following figure highlights the differences between the current and
the proposed approach.

IMA appraisal (current solution, with file signatures):

appraise
+-----------+
V |
+-------------------------+-----+ +-------+-----+ |
| RPM header | | ima rpm | file1 | sig | |
| ... | | plugin +-------+-----+ +-----+
| file1 sig [to be added] | sig |--------> ... | IMA |
| ... | | +-------+-----+ +-----+
| fileN sig [to be added] | | | fileN | sig |
+-------------------------+-----+ +-------+-----+

In this case, file signatures must be added to the RPM header, so that
the ima rpm plugin can extract them together with the file content. The
RPM header signature is not used.

IMA appraisal (with DIGLIM):

kernel hash table
with RPM header content
+---+ +--------------+
| |--->| file1 digest |
+---+ +--------------+
...
+---+ appraise (file1)
| | <--------------+
+----------------+-----+ +---+ |
| RPM header | | ^ |
| ... | | digest_list | |
| file1 digest | sig | rpm plugin | +-------+ +-----+
| ... | |-------------+--->| file1 | | IMA |
| fileN digest | | +-------+ +-----+
+----------------+-----+ |
^ |
+------------------------------------+
appraise (RPM header)

In this case, the RPM header is used as it is, and its signature is
used for IMA appraisal. Then, the digest_list rpm plugin executes the
user space parser to parse the RPM header and add the extracted digests
to a hash table in the kernel. IMA appraisal of the files in the RPM
package then consists of searching for their digests in the hash table.

Other than reusing available information as digest lists, another
advantage is the lower computational overhead compared to the solution
with file signatures (one signature verification for many files plus a
digest lookup per file, instead of a signature verification per file;
see Preliminary Performance Evaluation for more details).


Lifecycle

The lifecycle of DIGLIM is represented in the following figure:

Vendor premises (release process with modifications):

+------------+ +-----------------------+ +------------------------+
| 1. build a | | 2. generate and sign | | 3. publish the package |
| package |-->| a digest list from |-->| and digest list in |
| | | packaged files | | a repository |
+------------+ +-----------------------+ +------------------------+
|
|
User premises: |
V
+---------------------+ +------------------------+ +-----------------+
| 6. use digest lists | | 5. download the digest | | 4. download and |
| for measurement |<--| list and upload to |<--| install the |
| and/or appraisal | | the kernel | | package |
+---------------------+ +------------------------+ +-----------------+

The figure above represents all the steps when a digest list is
generated separately. However, as mentioned in Benefits, in most cases
existing packages can already be used as a source for digest lists,
limiting the effort for software vendors.

If, for example, RPMs are used as a source for digest lists, the figure
above becomes:

Vendor premises (release process without modifications):

+------------+ +------------------------+
| 1. build a | | 2. publish the package |
| package |-->| in a repository |---------------------+
| | | | |
+------------+ +------------------------+ |
|
|
User premises: |
V
+---------------------+ +------------------------+ +-----------------+
| 5. use digest lists | | 4. extract digest list | | 3. download and |
| for measurement |<--| from the package |<--| install the |
| and/or appraisal | | and upload to the | | package |
| | | kernel | | |
+---------------------+ +------------------------+ +-----------------+

Step 4 can be performed with the digest_list rpm plugin and the user
space parser, without changes to rpm itself.


Security Assumptions

As mentioned in the Introduction, DIGLIM will be primarily used in
conjunction with IMA to enforce a mandatory policy on all user space
processes, including those owned by root. Even root, in a system with a
locked-down kernel, cannot affect the enforcement of the mandatory
policy or, if changes are permitted, cannot make them without being
detected.

Given that the target of the enforcement is user space processes,
DIGLIM cannot be placed in the target, as a Mandatory Access Control
(MAC) design requires the components enforcing the mandatory policy to
be separate from the target.

While locking down a system and limiting actions with a mandatory
policy is generally perceived by users as an obstacle, it has noteworthy
benefits for the users themselves.

First, it would promptly block attempts by malicious software to steal
or misuse user assets. Although users could query the package managers
to detect such software, detection would happen after the fact, or not
at all if the malicious software tampered with the package managers.
With a mandatory policy enforced by the kernel, users would still be
able to decide which software they want to execute except that, unlike
package managers, the kernel is not affected by user space processes or
root.

Second, it might make systems more easily verifiable from outside, due
to the limited actions the system allows. When users connect to a
server, not only would they be able to verify the server identity, which
is already possible with communication protocols like TLS, but also
whether the software running on that server can be trusted to handle
their sensitive data.


Adoption

A former version of DIGLIM is used in the following OSes:

- openEuler 20.09
https://github.com/openeuler-mirror/kernel/tree/openEuler-20.09
- openEuler 21.03
https://github.com/openeuler-mirror/kernel/tree/openEuler-21.03

Originally, DIGLIM was part of IMA (known as IMA Digest Lists). In this
version, it has been redesigned as a standalone module with an API that
makes its functionality accessible to IMA and, eventually, other
subsystems.


User Space Support

Digest lists can be generated and managed with digest-list-tools:

https://github.com/openeuler-mirror/digest-list-tools

It includes two main applications:

- gen_digest_lists: generates digest lists from files in the
filesystem or from the RPM database (more digest list sources can be
supported);
- manage_digest_lists: converts and uploads digest lists to the
kernel.

Integration with rpm is done with the digest_list plugin:

https://gitee.com/src-openeuler/rpm/blob/master/Add-digest-list-plugin.patch

This plugin writes the RPM header and its signature to a file, so that
the file is ready to be appraised by IMA, and calls the user space
parser to convert and upload the digest list to the kernel.


Simple Usage Example (Tested with Fedora 33)

1. Digest list generation (RPM headers and their signature are copied
to the specified directory):

# mkdir /etc/digest_lists
# gen_digest_lists -t file -f rpm+db -d /etc/digest_lists -o add

2. Digest list upload with the user space parser:

# manage_digest_lists -p add-digest -d /etc/digest_lists

3. First digest list query:

# echo sha256-$(sha256sum /bin/cat) > /sys/kernel/security/integrity/diglim/digest_query
# cat /sys/kernel/security/integrity/diglim/digest_query
sha256-[...]-0-file_list-rpm-coreutils-8.32-18.fc33.x86_64 (actions: 0): version: 1, algo: sha256, type: 2, modifiers: 1, count: 106, datalen: 3392

4. Second digest list query:

# echo sha256-$(sha256sum /bin/zip) > /sys/kernel/security/integrity/diglim/digest_query
# cat /sys/kernel/security/integrity/diglim/digest_query
sha256-[...]-0-file_list-rpm-zip-3.0-27.fc33.x86_64 (actions: 0): version: 1, algo: sha256, type: 2, modifiers: 1, count: 4, datalen: 128


Preliminary Performance Evaluation

This section provides an initial estimation of the overhead introduced
by DIGLIM. The estimation has been performed on a Fedora 33 virtual
machine with 1447 packages installed. The virtual machine has 16 vCPU
(host CPU: AMD Ryzen Threadripper PRO 3955WX 16-Cores) and 2G of RAM
(host memory: 64G). The virtual machine also has a vTPM with libtpms and
swtpm as backend.

After writing the RPM headers to files, the size of the directory
containing them is 36M.

After converting the RPM headers to the compact digest list, the size of
the data being uploaded to the kernel is 3.6M.

The time to load the entire RPM database is 0.628s.

After loading the digest lists to the kernel, the slab usage due to
indexing is (obtained with slab_nomerge in the kernel command line):

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
118144 118144 100% 0,03K 923 128 3692K digest_list_item_ref_cache
102400 102400 100% 0,03K 800 128 3200K digest_item_cache
2646 2646 100% 0,09K 63 42 252K digest_list_item_cache

The stats, obtained from the digests_count interface, introduced later,
are:

Parser digests: 0
File digests: 99100
Metadata digests: 0
Digest list digests: 1423

On this installation, this would be the worst case, in which all files
are measured and/or appraised; this is currently not recommended without
enforcing an integrity policy that protects mutable files. The Infoflow
LSM is a component that could accomplish this task:

https://patchwork.kernel.org/project/linux-integrity/cover/[email protected]/

The first manageable goal of IMA with DIGLIM is to use an execution
policy, with measurement and/or appraisal of files executed or mapped in
memory as executable (in addition to kernel modules and firmware). In
this case, the digest list contains digests only for those files. The
numbers above change as follows.

After converting the RPM headers to the compact digest list, the size of
the data being uploaded to the kernel is 208K.

The time to load the digest of binaries and shared libraries is 0.062s.

After loading the digest lists to the kernel, the slab usage due to
indexing is:

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
7168 7168 100% 0,03K 56 128 224K digest_list_item_ref_cache
7168 7168 100% 0,03K 56 128 224K digest_item_cache
1134 1134 100% 0,09K 27 42 108K digest_list_item_cache

The stats, obtained from the digests_count interface, are:

Parser digests: 0
File digests: 5986
Metadata digests: 0
Digest list digests: 1104

Comparison with IMA

This section compares the performance between the current solution for
IMA measurement and appraisal, and IMA with DIGLIM.

Workload A (without DIGLIM):

1. cat file[0-5985] > /dev/null

Workload B (with DIGLIM):

1. echo $PWD/0-file_list-compact-file[0-1103] >
<securityfs>/integrity/diglim/digest_list_add
2. cat file[0-5985] > /dev/null

Workload A execution time without IMA policy:

real 0m0,155s
user 0m0,008s
sys 0m0,066s

Measurement

IMA policy:

measure fowner=2000 func=FILE_CHECK mask=MAY_READ use_diglim=allow pcr=11 ima_template=ima-sig

use_diglim is a policy keyword not yet supported by IMA.

Workload A execution time with IMA and 5986 files with signature
measured:

real 0m8,273s
user 0m0,008s
sys 0m2,537s

Workload B execution time with IMA, 1104 digest lists with signature
measured and uploaded to the kernel, and 5986 files with signature
accessed but not measured (due to the file digest being found in the
hash table):

real 0m1,837s
user 0m0,036s
sys 0m0,583s

Appraisal

IMA policy:

appraise fowner=2000 func=FILE_CHECK mask=MAY_READ use_diglim=allow

use_diglim is a policy keyword not yet supported by IMA.

Workload A execution time with IMA and 5986 files with file signature
appraised:

real 0m2,197s
user 0m0,011s
sys 0m2,022s

Workload B execution time with IMA, 1104 digest lists with signature
appraised and uploaded to the kernel, and with 5986 files with signature
not verified (due to the file digest being found in the hash table):

real 0m0,982s
user 0m0,020s
sys 0m0,865s

[1] LSS EU 2019 slides and video

[2] FutureTPM EU project, final review meeting demo slides and video

v2:
- fix documentation content and style issues (suggested by Mauro)
- fix basic definitions description and ensure that the _reserved field of
compact list headers is zero (suggested by Greg KH)
- document the static inline functions to access compact list data
(suggested by Mauro)
- rename htable global variable to diglim_htable (suggested by Mauro)
- add IMA API to retrieve integrity information about a file or buffer
- display the digest list in the original format (same endianness as when
it was uploaded)
- support digest lists with appended signature (for IMA appraisal)
- fix bugs in the tests
- allocate the digest list label in digest_list_add()
- rename digest_label interface to digest_list_label
- check input for digest_query and digest_list_label interfaces
- don't remove entries in digest_lists_loaded if the same digest list is
uploaded again to the kernel
- deny write access to the digest lists while IMA actions are retrieved
- add new test digest_list_add_del_test_file_upload_measured_chown
- remove unused COMPACT_KEY type

v1:
- remove 'ima: Add digest, algo, measured parameters to
ima_measure_critical_data()', replaced by:
https://lore.kernel.org/linux-integrity/[email protected]/
- add 'Lifecycle' subsection to better clarify how digest lists are
generated and used (suggested by Greg KH)
- remove 'Possible Usages' subsection and add 'Benefits for IMA
Measurement' and 'Benefits for IMA Appraisal' subsubsections
- add 'Preliminary Performance Evaluation' subsection
- declare digest_offset and hdr_offset in the digest_list_item_ref
structure as u32 (sufficient for digest lists of 4G) to make room for a
list_head structure (digest_list_item_ref size: 32)
- implement digest list reference management with a linked list instead of
an array
- reorder structure members for better alignment (suggested by Mauro)
- rename digest_lookup() to __digest_lookup() (suggested by Mauro)
- introduce an object cache for each defined structure
- replace atomic_long_t with unsigned long in h_table structure definition
(suggested by Greg KH)
- remove GPL2 license text and file names (suggested by Greg KH)
- ensure that the _reserved field of compact_list_hdr is equal to zero
(suggested by Greg KH)
- dynamically allocate the buffer in digest_lists_show_htable_len() to
avoid frame size warning (reported by kernel test robot, dynamic
allocation suggested by Mauro)
- split documentation in multiple files and reference the source code
(suggested by Mauro)
- use #ifdef in include/linux/diglim.h
- improve generation of event name for IMA measurements
- add new patch to introduce the 'Remote Attestation' section in the
documentation
- fix assignment of actions variable in digest_list_read() and
digest_list_write()
- always release dentry reference when digest_list_get_secfs_files() is
called
- rewrite add/del and query interfaces to take advantage of m->private
- prevent deletion of a digest list only if there are actions done at
addition time that are not currently being performed
- fix doc warnings (replace Returns with Return:)
- perform queries of digest list digests in the existing tests
- add new tests: digest_list_add_del_test_file_upload_measured,
digest_list_check_measurement_list_test_file_upload and
digest_list_check_measurement_list_test_buffer_upload
- don't return a value from digest_del(), digest_list_ref_del, and
digest_list_del()
- improve Makefile for tests

Roberto Sassu (13):
diglim: Overview
diglim: Basic definitions
diglim: Objects
diglim: Methods
diglim: Parser
diglim: IMA info
diglim: Interfaces - digest_list_add, digest_list_del
diglim: Interfaces - digest_lists_loaded
diglim: Interfaces - digest_list_label
diglim: Interfaces - digest_query
diglim: Interfaces - digests_count
diglim: Remote Attestation
diglim: Tests

.../security/diglim/architecture.rst | 46 +
.../security/diglim/implementation.rst | 228 +++
Documentation/security/diglim/index.rst | 14 +
.../security/diglim/introduction.rst | 599 +++++++
.../security/diglim/remote_attestation.rst | 87 +
Documentation/security/diglim/tests.rst | 70 +
Documentation/security/index.rst | 1 +
MAINTAINERS | 20 +
include/linux/diglim.h | 28 +
include/linux/kernel_read_file.h | 1 +
include/uapi/linux/diglim.h | 51 +
security/integrity/Kconfig | 1 +
security/integrity/Makefile | 1 +
security/integrity/diglim/Kconfig | 11 +
security/integrity/diglim/Makefile | 8 +
security/integrity/diglim/diglim.h | 232 +++
security/integrity/diglim/fs.c | 865 ++++++++++
security/integrity/diglim/ima.c | 122 ++
security/integrity/diglim/methods.c | 513 ++++++
security/integrity/diglim/parser.c | 274 ++++
security/integrity/integrity.h | 4 +
tools/testing/selftests/Makefile | 1 +
tools/testing/selftests/diglim/Makefile | 19 +
tools/testing/selftests/diglim/common.c | 135 ++
tools/testing/selftests/diglim/common.h | 32 +
tools/testing/selftests/diglim/config | 3 +
tools/testing/selftests/diglim/selftest.c | 1442 +++++++++++++++++
27 files changed, 4808 insertions(+)
create mode 100644 Documentation/security/diglim/architecture.rst
create mode 100644 Documentation/security/diglim/implementation.rst
create mode 100644 Documentation/security/diglim/index.rst
create mode 100644 Documentation/security/diglim/introduction.rst
create mode 100644 Documentation/security/diglim/remote_attestation.rst
create mode 100644 Documentation/security/diglim/tests.rst
create mode 100644 include/linux/diglim.h
create mode 100644 include/uapi/linux/diglim.h
create mode 100644 security/integrity/diglim/Kconfig
create mode 100644 security/integrity/diglim/Makefile
create mode 100644 security/integrity/diglim/diglim.h
create mode 100644 security/integrity/diglim/fs.c
create mode 100644 security/integrity/diglim/ima.c
create mode 100644 security/integrity/diglim/methods.c
create mode 100644 security/integrity/diglim/parser.c
create mode 100644 tools/testing/selftests/diglim/Makefile
create mode 100644 tools/testing/selftests/diglim/common.c
create mode 100644 tools/testing/selftests/diglim/common.h
create mode 100644 tools/testing/selftests/diglim/config
create mode 100644 tools/testing/selftests/diglim/selftest.c

--
2.25.1


2021-09-14 16:37:51

by Roberto Sassu

Subject: [PATCH v3 05/13] diglim: Parser

Introduce the necessary functions to parse a digest list and to execute the
requested operation.

The main function is digest_list_parse(), which coordinates the various
steps required to add or delete a digest list, and has the logic to roll
back when one of the steps fails.

A more detailed description about the steps can be found in
Documentation/security/diglim/implementation.rst

Signed-off-by: Roberto Sassu <[email protected]>
---
.../security/diglim/implementation.rst | 35 +++
MAINTAINERS | 1 +
security/integrity/diglim/Makefile | 2 +-
security/integrity/diglim/diglim.h | 3 +
security/integrity/diglim/parser.c | 274 ++++++++++++++++++
5 files changed, 314 insertions(+), 1 deletion(-)
create mode 100644 security/integrity/diglim/parser.c

diff --git a/Documentation/security/diglim/implementation.rst b/Documentation/security/diglim/implementation.rst
index 83342ec12f74..626af0d245ef 100644
--- a/Documentation/security/diglim/implementation.rst
+++ b/Documentation/security/diglim/implementation.rst
@@ -182,3 +182,38 @@ This section introduces the methods requires to manage the three objects
defined.

.. kernel-doc:: security/integrity/diglim/methods.c
+
+
+Parser
+------
+
+This section introduces the necessary functions to parse a digest list and
+to execute the requested operation.
+
+.. kernel-doc:: security/integrity/diglim/parser.c
+
+The main function is digest_list_parse(), which coordinates the various
+steps required to add or delete a digest list, and has the logic to roll
+back when one of the steps fails.
+
+#. Calls digest_list_validate() to validate the passed buffer containing
+ the digest list to ensure that the format is correct.
+
+#. Calls get_digest_list() to create a new digest_list_item for the add
+ operation, or to retrieve the existing one for the delete operation.
+ get_digest_list() refuses to add digest lists that were previously
+ added and to delete digest lists that weren't previously added. Also,
+ get_digest_list() refuses to delete digest lists if there are actions
+ done at addition time that are not currently being performed (it would
+ guarantee that also deletion is notified to remote verifiers).
+
+#. Calls _digest_list_parse() which takes the created/retrieved
+ struct digest_list_item and adds or delete the digests included in the
+ digest list.
+
+#. If an error occurred, performs a rollback to the previous state, by
+ calling _digest_list_parse() with the opposite operation and the buffer
+ size at the time the error occurred.
+
+#. digest_list_parse() deletes the struct digest_list_item on unsuccessful
+ add or successful delete.
diff --git a/MAINTAINERS b/MAINTAINERS
index 06c6ba0b3f25..f5959936d490 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5515,6 +5515,7 @@ F: include/linux/diglim.h
F: include/uapi/linux/diglim.h
F: security/integrity/diglim/diglim.h
F: security/integrity/diglim/methods.c
+F: security/integrity/diglim/parser.c

DIOLAN U2C-12 I2C DRIVER
M: Guenter Roeck <[email protected]>
diff --git a/security/integrity/diglim/Makefile b/security/integrity/diglim/Makefile
index b761ed8cfb3e..34e4e154fff3 100644
--- a/security/integrity/diglim/Makefile
+++ b/security/integrity/diglim/Makefile
@@ -5,4 +5,4 @@

obj-$(CONFIG_DIGLIM) += diglim.o

-diglim-y := methods.o
+diglim-y := methods.o parser.o
diff --git a/security/integrity/diglim/diglim.h b/security/integrity/diglim/diglim.h
index 75359f9cd3dd..afdb0affdc5e 100644
--- a/security/integrity/diglim/diglim.h
+++ b/security/integrity/diglim/diglim.h
@@ -218,4 +218,7 @@ struct digest_item *digest_list_add(u8 *digest, enum hash_algo algo,
const char *label);
void digest_list_del(u8 *digest, enum hash_algo algo, u8 actions,
struct digest_list_item *digest_list);
+
+int digest_list_parse(loff_t size, void *buf, enum ops op, u8 actions,
+ u8 *digest, enum hash_algo algo, const char *label);
#endif /*__DIGLIM_INTERNAL_H*/
diff --git a/security/integrity/diglim/parser.c b/security/integrity/diglim/parser.c
new file mode 100644
index 000000000000..435d231028c7
--- /dev/null
+++ b/security/integrity/diglim/parser.c
@@ -0,0 +1,274 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2005,2006,2007,2008 IBM Corporation
+ * Copyright (C) 2017-2021 Huawei Technologies Duesseldorf GmbH
+ *
+ * Author: Roberto Sassu <[email protected]>
+ *
+ * Functions to parse digest lists.
+ */
+
+#include <linux/vmalloc.h>
+#include <linux/module.h>
+
+#include "diglim.h"
+#include "../integrity.h"
+
+/**
+ * digest_list_validate - validate format of digest list
+ * @size: buffer size
+ * @buf: buffer containing the digest list
+ *
+ * This function validates the format of the passed digest list.
+ *
+ * Return: 0 if the digest list was successfully validated, -EINVAL otherwise.
+ */
+static int digest_list_validate(loff_t size, void *buf)
+{
+ void *bufp = buf, *bufendp = buf + size;
+ struct compact_list_hdr *hdr;
+ size_t digest_len;
+
+ while (bufp < bufendp) {
+ if (bufp + sizeof(*hdr) > bufendp) {
+ pr_err("insufficient data\n");
+ return -EINVAL;
+ }
+
+ hdr = bufp;
+
+ if (hdr->version != 1) {
+ pr_err("unsupported version\n");
+ return -EINVAL;
+ }
+
+ if (hdr->_reserved != 0) {
+ pr_err("unexpected value for _reserved field\n");
+ return -EINVAL;
+ }
+
+ hdr->type = le16_to_cpu(hdr->type);
+ hdr->modifiers = le16_to_cpu(hdr->modifiers);
+ hdr->algo = le16_to_cpu(hdr->algo);
+ hdr->count = le32_to_cpu(hdr->count);
+ hdr->datalen = le32_to_cpu(hdr->datalen);
+
+ if (hdr->algo >= HASH_ALGO__LAST) {
+ pr_err("invalid hash algorithm\n");
+ return -EINVAL;
+ }
+
+ digest_len = hash_digest_size[hdr->algo];
+
+ if (hdr->type >= COMPACT__LAST ||
+ hdr->type == COMPACT_DIGEST_LIST) {
+ pr_err("invalid type %d\n", hdr->type);
+ return -EINVAL;
+ }
+
+ bufp += sizeof(*hdr);
+
+ if (hdr->datalen != hdr->count * digest_len ||
+ bufp + hdr->datalen > bufendp) {
+ pr_err("invalid data\n");
+ return -EINVAL;
+ }
+
+ bufp += hdr->count * digest_len;
+ }
+
+ return 0;
+}
+
+/**
+ * _digest_list_parse - parse digest list and add/delete digests
+ * @size: buffer size
+ * @buf: buffer containing the digest list
+ * @op: operation to be performed
+ * @digest_list: digest list digests being added/deleted belong to
+ *
+ * This function parses the digest list and adds or deletes the digests in
+ * the found digest blocks.
+ *
+ * Return: the buffer size if all digests were successfully added or deleted,
+ * the size of the already parsed buffer on error.
+ */
+static int _digest_list_parse(loff_t size, void *buf, enum ops op,
+ struct digest_list_item *digest_list)
+{
+ void *bufp = buf, *bufendp = buf + size;
+ struct compact_list_hdr *hdr;
+ struct digest_item *d = ERR_PTR(-EINVAL);
+ size_t digest_len;
+ int i;
+
+ while (bufp < bufendp) {
+ if (bufp + sizeof(*hdr) > bufendp)
+ break;
+
+ hdr = bufp;
+ bufp += sizeof(*hdr);
+
+ digest_len = hash_digest_size[hdr->algo];
+
+ for (i = 0; i < hdr->count && bufp + digest_len <= bufendp;
+ i++, bufp += digest_len) {
+ switch (op) {
+ case DIGEST_LIST_ADD:
+ d = digest_add(bufp, hdr->algo, hdr->type,
+ digest_list, bufp - buf,
+ (void *)hdr - buf);
+ if (IS_ERR(d)) {
+				pr_err("failed to add a digest from %s\n",
+				       digest_list->label);
+ goto out;
+ }
+
+ break;
+ case DIGEST_LIST_DEL:
+ digest_del(bufp, hdr->algo, hdr->type,
+ digest_list, bufp - buf,
+ (void *)hdr - buf);
+ break;
+ default:
+ break;
+ }
+ }
+ }
+out:
+ return bufp - buf;
+}
+
+/**
+ * get_digest_list - get the digest list the extracted digests will be associated with
+ * @size: buffer size
+ * @buf: buffer containing the digest list
+ * @op: digest list operation
+ * @actions: actions performed on the digest list being processed
+ * @digest: digest of the digest list
+ * @algo: digest algorithm
+ * @label: label to identify the digest list (e.g. file name)
+ *
+ * This function retrieves the digest list item for the passed digest and
+ * algorithm: at addition time it creates a new one, at deletion time it
+ * looks up the existing one.
+ *
+ * This function prevents the imbalance of digests (references left after
+ * delete) by ensuring that only digest lists that were previously added can
+ * be deleted.
+ *
+ * This function also ensures that the actions done at the time of addition
+ * are also performed at the time of deletion (this guarantees that the
+ * deletion is also notified to remote verifiers).
+ *
+ * Return: the retrieved/created digest list item on success, an error pointer
+ * otherwise.
+ */
+static struct digest_list_item *get_digest_list(loff_t size, void *buf,
+ enum ops op, u8 actions,
+ u8 *digest, enum hash_algo algo,
+ const char *label)
+{
+ struct digest_item *d;
+ struct digest_list_item *digest_list;
+ int digest_len = hash_digest_size[algo];
+
+ switch (op) {
+ case DIGEST_LIST_ADD:
+ /* Add digest list to be associated to each digest. */
+ d = digest_list_add(digest, algo, size, buf, actions, label);
+ if (IS_ERR(d))
+ return (void *)d;
+
+ digest_list = list_first_entry(&d->refs,
+ struct digest_list_item_ref, list)->digest_list;
+ break;
+ case DIGEST_LIST_DEL:
+ /* Lookup digest list to delete the references. */
+ d = __digest_lookup(digest, algo, COMPACT_DIGEST_LIST, NULL,
+ NULL);
+ if (!d) {
+ print_hex_dump(KERN_ERR,
+ "digest list digest not found: ",
+ DUMP_PREFIX_NONE, digest_len, 1, digest,
+ digest_len, true);
+ return ERR_PTR(-ENOENT);
+ }
+
+ digest_list = list_first_entry(&d->refs,
+ struct digest_list_item_ref, list)->digest_list;
+
+ /*
+ * Reject deletion if there are actions done at addition time
+ * that are currently not being performed.
+ */
+ if ((digest_list->actions & actions) != digest_list->actions) {
+ pr_err("missing actions, add: %d, del: %d\n",
+ digest_list->actions, actions);
+ return ERR_PTR(-EPERM);
+ }
+
+ break;
+ default:
+ return ERR_PTR(-EINVAL);
+ }
+
+ return digest_list;
+}
+
+/**
+ * digest_list_parse - parse a digest list
+ * @size: buffer size
+ * @buf: buffer containing the digest list
+ * @op: digest list operation
+ * @actions: actions performed on the digest list being processed
+ * @digest: digest of the digest list
+ * @algo: digest algorithm
+ * @label: label to identify the digest list (e.g. file name)
+ *
+ * This function parses the passed digest list and executes the requested
+ * operation. If the operation cannot be successfully executed, this function
+ * performs a rollback to the previous state.
+ *
+ * Return: the buffer size on success, a negative value otherwise.
+ */
+int digest_list_parse(loff_t size, void *buf, enum ops op, u8 actions,
+ u8 *digest, enum hash_algo algo, const char *label)
+{
+ struct digest_list_item *digest_list;
+ enum ops rollback_op = (op == DIGEST_LIST_ADD) ?
+ DIGEST_LIST_DEL : DIGEST_LIST_ADD;
+ int ret, rollback_size;
+
+ ret = digest_list_validate(size, buf);
+ if (ret < 0)
+ return ret;
+
+ digest_list = get_digest_list(size, buf, op, actions, digest, algo,
+ label);
+ if (IS_ERR(digest_list))
+ return PTR_ERR(digest_list);
+
+ ret = _digest_list_parse(size, buf, op, digest_list);
+ if (ret < 0)
+ goto out;
+
+ if (ret != size) {
+ rollback_size = ret;
+
+ ret = _digest_list_parse(rollback_size, buf, rollback_op,
+ digest_list);
+ if (ret != rollback_size)
+ pr_err("rollback failed\n");
+
+ ret = -EINVAL;
+ }
+out:
+ /* Delete digest list on unsuccessful add or successful delete. */
+ if ((op == DIGEST_LIST_ADD && ret < 0) ||
+ (op == DIGEST_LIST_DEL && ret == size))
+ digest_list_del(digest, algo, actions, digest_list);
+
+ return ret;
+}
--
2.25.1
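The add/rollback sequence implemented by digest_list_parse() above can be illustrated with a minimal userspace sketch (hypothetical names and a toy data store, not the kernel API): entries are applied one by one and, on a partial failure, the already-consumed prefix is replayed with the opposite operation so the store is left unchanged.

```c
#include <assert.h>
#include <stddef.h>

enum ops { LIST_ADD, LIST_DEL };

/* Toy "database": counts how many times each byte value was added. */
static int db[256];

/* Apply @op to the first @size bytes of @buf; a 0xff byte simulates a
 * digest_add() failure. Returns the number of bytes consumed, mirroring
 * _digest_list_parse(), which returns the parsed buffer size.
 */
static size_t parse(const unsigned char *buf, size_t size, enum ops op)
{
	size_t i;

	for (i = 0; i < size; i++) {
		if (op == LIST_ADD && buf[i] == 0xff)
			break;		/* simulated add failure */
		db[buf[i]] += (op == LIST_ADD) ? 1 : -1;
	}
	return i;
}

/* Mirror of digest_list_parse(): on partial failure, replay the consumed
 * prefix with the opposite operation to restore the previous state.
 */
static int list_parse(const unsigned char *buf, size_t size, enum ops op)
{
	enum ops rollback_op = (op == LIST_ADD) ? LIST_DEL : LIST_ADD;
	size_t done = parse(buf, size, op);

	if (done != size) {
		parse(buf, done, rollback_op);	/* roll back the prefix */
		return -1;
	}
	return 0;
}
```

The same pattern explains why _digest_list_parse() returns the parsed size rather than an error code: the caller needs the exact prefix length to undo.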

2021-09-14 16:38:07

by Roberto Sassu

[permalink] [raw]
Subject: [PATCH v3 06/13] diglim: IMA info

Introduce diglim_ima_get_info() to retrieve the digest and the actions
performed by IMA on the passed digest list file or buffer.

diglim_ima_get_info() requires the caller to deny writes to the file, to
ensure that the file content from which the integrity status is retrieved
didn't change since the buffer passed as argument was filled.
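The flag-to-actions translation this patch performs can be sketched with mock constants (the values and names below are illustrative only; the kernel defines the real flags and COMPACT_ACTION_* bits in its own headers): each IMA status flag sets the corresponding bit in the actions mask reported to DIGLIM.

```c
#include <assert.h>
#include <stdint.h>

/* Mock flag values and action bit positions, for illustration only. */
#define MOCK_IMA_MEASURED	0x01
#define MOCK_IMA_APPRAISED	0x02
#define ACTION_IMA_MEASURED	0
#define ACTION_IMA_APPRAISED	1

/* Translate IMA integrity flags into a DIGLIM actions bitmask, in the
 * style of diglim_ima_get_info_file() operating on iint->flags.
 */
static uint8_t flags_to_actions(unsigned long flags)
{
	uint8_t actions = 0;

	if (flags & MOCK_IMA_MEASURED)
		actions |= 1 << ACTION_IMA_MEASURED;
	if (flags & MOCK_IMA_APPRAISED)
		actions |= 1 << ACTION_IMA_APPRAISED;
	return actions;
}
```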

Signed-off-by: Roberto Sassu <[email protected]>
---
MAINTAINERS | 1 +
security/integrity/diglim/Makefile | 2 +-
security/integrity/diglim/diglim.h | 6 ++
security/integrity/diglim/ima.c | 122 +++++++++++++++++++++++++++++
security/integrity/integrity.h | 4 +
5 files changed, 134 insertions(+), 1 deletion(-)
create mode 100644 security/integrity/diglim/ima.c

diff --git a/MAINTAINERS b/MAINTAINERS
index f5959936d490..f10690dda734 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5514,6 +5514,7 @@ F: Documentation/security/diglim/introduction.rst
F: include/linux/diglim.h
F: include/uapi/linux/diglim.h
F: security/integrity/diglim/diglim.h
+F: security/integrity/diglim/ima.c
F: security/integrity/diglim/methods.c
F: security/integrity/diglim/parser.c

diff --git a/security/integrity/diglim/Makefile b/security/integrity/diglim/Makefile
index 34e4e154fff3..880dc5300792 100644
--- a/security/integrity/diglim/Makefile
+++ b/security/integrity/diglim/Makefile
@@ -5,4 +5,4 @@

obj-$(CONFIG_DIGLIM) += diglim.o

-diglim-y := methods.o parser.o
+diglim-y := methods.o parser.o ima.o
diff --git a/security/integrity/diglim/diglim.h b/security/integrity/diglim/diglim.h
index afdb0affdc5e..ebe8936520b5 100644
--- a/security/integrity/diglim/diglim.h
+++ b/security/integrity/diglim/diglim.h
@@ -22,6 +22,8 @@
#include <linux/hash_info.h>
#include <linux/diglim.h>

+#include "../integrity.h"
+
#define MAX_DIGEST_SIZE 64
#define HASH_BITS 10
#define DIGLIM_HTABLE_SIZE (1 << HASH_BITS)
@@ -221,4 +223,8 @@ void digest_list_del(u8 *digest, enum hash_algo algo, u8 actions,

int digest_list_parse(loff_t size, void *buf, enum ops op, u8 actions,
u8 *digest, enum hash_algo algo, const char *label);
+
+int diglim_ima_get_info(struct file *file, u8 *buffer, size_t buffer_len,
+ char *event_name, u8 *digest, size_t digest_len,
+ enum hash_algo *algo, u8 *actions);
#endif /*__DIGLIM_INTERNAL_H*/
diff --git a/security/integrity/diglim/ima.c b/security/integrity/diglim/ima.c
new file mode 100644
index 000000000000..2cc1ec1299f8
--- /dev/null
+++ b/security/integrity/diglim/ima.c
@@ -0,0 +1,122 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Huawei Technologies Duesseldorf GmbH
+ *
+ * Author: Roberto Sassu <[email protected]>
+ *
+ * Functions to retrieve the integrity status from IMA.
+ */
+
+#include <linux/vmalloc.h>
+#include <linux/module.h>
+#include <linux/ima.h>
+
+#include "diglim.h"
+
+static int diglim_ima_get_info_file(struct file *file, u8 *digest,
+ size_t digest_len, enum hash_algo *algo,
+ u8 *actions)
+{
+ struct integrity_iint_cache *iint;
+ struct inode *inode = file_inode(file);
+ int ret = -ENOENT;
+
+ iint = integrity_iint_find(inode);
+ if (!iint)
+ return ret;
+
+ mutex_lock(&iint->mutex);
+ /* File digest has not been calculated. */
+ if (!(iint->flags & IMA_COLLECTED))
+ goto out;
+
+ ret = 0;
+
+ if (iint->flags & IMA_MEASURED)
+ *actions |= 1 << COMPACT_ACTION_IMA_MEASURED;
+
+ if (iint->flags & IMA_APPRAISED)
+ *actions |= 1 << COMPACT_ACTION_IMA_APPRAISED;
+
+ if (test_bit(IMA_DIGSIG, &iint->atomic_flags))
+ *actions |= 1 << COMPACT_ACTION_IMA_APPRAISED_DIGSIG;
+
+	if (algo)
+		*algo = iint->ima_hash->algo;
+	if (digest)
+		memcpy(digest, iint->ima_hash->digest,
+		       hash_digest_size[iint->ima_hash->algo]);
+out:
+ mutex_unlock(&iint->mutex);
+ return ret;
+}
+
+static int diglim_ima_get_info_buffer(u8 *buffer, size_t buffer_len,
+ char *event_name, u8 *digest,
+ size_t digest_len, enum hash_algo *algo,
+ u8 *actions)
+{
+ int ret;
+
+ ret = ima_measure_critical_data("diglim", event_name, buffer,
+ buffer_len, false, digest, digest_len);
+ if (ret < 0 && ret != -EEXIST)
+ return -ENOENT;
+
+ *algo = ima_get_current_hash_algo();
+
+ if (!ret || ret == -EEXIST)
+ *actions |= 1 << COMPACT_ACTION_IMA_MEASURED;
+
+ return 0;
+}
+
+/**
+ * diglim_ima_get_info - retrieve the integrity status of digest list from IMA
+ * @file: file to retrieve the integrity status from
+ * @buffer: buffer to retrieve the integrity status from (alternative to file)
+ * @buffer_len: buffer length
+ * @event_name: name of the event to be generated by IMA for buffer measurement
+ * @digest: digest of the file or the buffer
+ * @digest_len: digest length
+ * @algo: digest algorithm
+ * @actions: actions performed on the file or the buffer
+ *
+ * This function attempts to retrieve some information from the passed digest
+ * list file or buffer: the digest, its algorithm, and the actions performed by
+ * IMA.
+ *
+ * This function first attempts to retrieve the information from the file, and
+ * if unsuccessful, attempts with the buffer.
+ *
+ * The caller must prevent writes to the file with deny_write_access() to
+ * ensure that the file content from which the integrity status is retrieved
+ * didn't change since the buffer passed as argument was filled.
+ *
+ * Return: 0 if the information has been successfully retrieved, -ENOENT
+ * otherwise.
+ */
+int diglim_ima_get_info(struct file *file, u8 *buffer, size_t buffer_len,
+ char *event_name, u8 *digest, size_t digest_len,
+ enum hash_algo *algo, u8 *actions)
+{
+ int ret = -ENOENT;
+
+ /* Ensure that the file is write-locked. */
+ if (file && atomic_read(&file_inode(file)->i_writecount) >= 0)
+ return -EINVAL;
+
+ if (file) {
+ ret = diglim_ima_get_info_file(file, digest, digest_len, algo,
+ actions);
+ if (!ret && (*actions & (1 << COMPACT_ACTION_IMA_MEASURED)))
+ return ret;
+ }
+
+ if (buffer) {
+ ret = diglim_ima_get_info_buffer(buffer, buffer_len, event_name,
+ digest, digest_len, algo,
+ actions);
+ }
+
+ return ret;
+}
diff --git a/security/integrity/integrity.h b/security/integrity/integrity.h
index 74919b638f52..de5dde382f11 100644
--- a/security/integrity/integrity.h
+++ b/security/integrity/integrity.h
@@ -6,6 +6,9 @@
* Mimi Zohar <[email protected]>
*/

+#ifndef __INTEGRITY_H
+#define __INTEGRITY_H
+
#ifdef pr_fmt
#undef pr_fmt
#endif
@@ -285,3 +288,4 @@ static inline void __init add_to_platform_keyring(const char *source,
{
}
#endif
+#endif /*__INTEGRITY_H*/
--
2.25.1

2021-09-14 16:38:42

by Roberto Sassu

Subject: [PATCH v3 08/13] diglim: Interfaces - digest_lists_loaded

Introduce the digest_lists_loaded directory in
<securityfs>/integrity/diglim.

It contains two files for each loaded digest list: one shows the digest
list in binary format, and the other (with .ascii suffix) shows the digest
list in ASCII format.

Files are added and removed at the same time digest lists are added and
removed.
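Each file name encodes the digest list digest as `<algo>-<digest>-<label>`, which parse_digest_list_filename() later splits back into an algorithm and raw digest bytes. A standalone sketch of that parsing (with a reduced, hypothetical algorithm table standing in for the kernel's hash_algo_name[]/hash_digest_size[]):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Reduced algorithm table for illustration only. */
static const char *const algo_name[] = { "sha1", "sha256" };
static const size_t algo_len[] = { 20, 32 };
#define NR_ALGOS (sizeof(algo_name) / sizeof(algo_name[0]))

/* Parse "<algo>-<hexdigest>-<label>" into an algorithm index and raw
 * digest bytes. Returns 0 on success, -1 on malformed input.
 */
static int parse_filename(const char *name, int *algo, unsigned char *digest)
{
	const char *sep = strchr(name, '-');
	size_t i, j;

	if (!sep)
		return -1;

	/* match_string() equivalent over the algorithm table */
	for (i = 0; i < NR_ALGOS; i++)
		if (strlen(algo_name[i]) == (size_t)(sep - name) &&
		    !strncmp(name, algo_name[i], sep - name))
			break;
	if (i == NR_ALGOS)
		return -1;

	*algo = (int)i;
	/* hex2bin() equivalent: two hex characters per digest byte */
	for (j = 0; j < algo_len[i]; j++) {
		char byte[3] = { sep[1 + 2 * j], sep[2 + 2 * j], '\0' };
		char *end;

		digest[j] = (unsigned char)strtoul(byte, &end, 16);
		if (end != byte + 2)
			return -1;
	}
	return 0;
}
```

The trailing label is deliberately ignored here, as in the kernel function, which only needs the digest to look up the digest list.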

Signed-off-by: Roberto Sassu <[email protected]>
---
security/integrity/diglim/fs.c | 318 ++++++++++++++++++++++++++++++++-
1 file changed, 315 insertions(+), 3 deletions(-)

diff --git a/security/integrity/diglim/fs.c b/security/integrity/diglim/fs.c
index 5698afd2d18a..4913c1df2918 100644
--- a/security/integrity/diglim/fs.c
+++ b/security/integrity/diglim/fs.c
@@ -26,6 +26,17 @@
#define MAX_DIGEST_LIST_SIZE (64 * 1024 * 1024 - 1)

static struct dentry *diglim_dir;
+/**
+ * DOC: digest_lists_loaded
+ *
+ * digest_lists_loaded is a directory containing two files for each
+ * loaded digest list: one shows the digest list in binary format, and the
+ * other (with .ascii suffix) shows the digest list in ASCII format.
+ *
+ * Files are added and removed at the same time digest lists are added and
+ * removed.
+ */
+static struct dentry *digest_lists_loaded_dir;
/**
* DOC: digest_list_add
*
@@ -48,6 +59,255 @@ static struct dentry *digest_list_add_dentry;
static struct dentry *digest_list_del_dentry;
char digest_list_label[NAME_MAX + 1];

+static int parse_digest_list_filename(const char *digest_list_filename,
+ u8 *digest, enum hash_algo *algo)
+{
+ u8 *sep;
+ int i;
+
+ sep = strchr(digest_list_filename, '-');
+ if (!sep)
+ return -EINVAL;
+
+ *sep = '\0';
+ i = match_string(hash_algo_name, HASH_ALGO__LAST, digest_list_filename);
+ *sep = '-';
+
+ if (i < 0)
+ return -ENOENT;
+
+ *algo = i;
+ return hex2bin(digest, sep + 1, hash_digest_size[*algo]);
+}
+
+/* *pos is the offset of the digest list data to show. */
+static void *digest_list_start(struct seq_file *m, loff_t *pos)
+{
+ struct digest_item *d;
+ u8 digest[IMA_MAX_DIGEST_SIZE];
+ enum hash_algo algo;
+ struct digest_list_item *digest_list;
+ int ret;
+
+ if (m->private) {
+ digest_list = (struct digest_list_item *)m->private;
+
+ if (*pos == digest_list->size)
+ return NULL;
+
+ return digest_list->buf + *pos;
+ }
+
+ ret = parse_digest_list_filename(file_dentry(m->file)->d_name.name,
+ digest, &algo);
+ if (ret < 0)
+ return NULL;
+
+ d = __digest_lookup(digest, algo, COMPACT_DIGEST_LIST, NULL, NULL);
+ if (!d)
+ return NULL;
+
+ digest_list = list_first_entry(&d->refs,
+ struct digest_list_item_ref, list)->digest_list;
+ m->private = digest_list;
+ return digest_list->buf;
+}
+
+static void *digest_list_next(struct seq_file *m, void *v, loff_t *pos)
+{
+ struct compact_list_hdr *hdr;
+ struct digest_list_item *digest_list =
+ (struct digest_list_item *)m->private;
+ void *bufp = digest_list->buf;
+ bool is_header = false;
+
+ /* Determine if v points to a header or a digest. */
+ while (bufp <= v) {
+ hdr = (struct compact_list_hdr *)bufp;
+ if (bufp == v) {
+ is_header = true;
+ break;
+ }
+
+ bufp += sizeof(*hdr) + hdr->datalen;
+ }
+
+ if (is_header)
+ *pos += sizeof(*hdr);
+ else
+ *pos += hash_digest_size[hdr->algo];
+
+ if (*pos == digest_list->size)
+ return NULL;
+
+ return digest_list->buf + *pos;
+}
+
+static void digest_list_stop(struct seq_file *m, void *v)
+{
+}
+
+static void print_digest(struct seq_file *m, u8 *digest, u32 size)
+{
+ u32 i;
+
+ for (i = 0; i < size; i++)
+ seq_printf(m, "%02x", *(digest + i));
+}
+
+static void digest_list_putc(struct seq_file *m, void *data, int datalen)
+{
+ while (datalen--)
+ seq_putc(m, *(char *)data++);
+}
+
+static int digest_list_show_common(struct seq_file *m, void *v, bool binary)
+{
+ struct compact_list_hdr *hdr, hdr_orig;
+ struct digest_list_item *digest_list =
+ (struct digest_list_item *)m->private;
+ void *bufp = digest_list->buf;
+ bool is_header = false;
+
+ /* Determine if v points to a header or a digest. */
+ while (bufp <= v) {
+ hdr = (struct compact_list_hdr *)bufp;
+ if (bufp == v) {
+ is_header = true;
+ break;
+ }
+
+ bufp += sizeof(*hdr) + hdr->datalen;
+ }
+
+ if (is_header) {
+ if (binary) {
+ memcpy(&hdr_orig, v, sizeof(hdr_orig));
+ hdr_orig.type = cpu_to_le16(hdr_orig.type);
+ hdr_orig.modifiers = cpu_to_le16(hdr_orig.modifiers);
+ hdr_orig.algo = cpu_to_le16(hdr_orig.algo);
+ hdr_orig.count = cpu_to_le32(hdr_orig.count);
+ hdr_orig.datalen = cpu_to_le32(hdr_orig.datalen);
+ digest_list_putc(m, &hdr_orig, sizeof(hdr_orig));
+ } else {
+ seq_printf(m,
+ "actions: %d, version: %d, algo: %s, type: %d, modifiers: %d, count: %d, datalen: %d\n",
+ digest_list->actions, hdr->version,
+ hash_algo_name[hdr->algo], hdr->type,
+ hdr->modifiers, hdr->count, hdr->datalen);
+ }
+ return 0;
+ }
+
+ if (binary) {
+ digest_list_putc(m, v, hash_digest_size[hdr->algo]);
+ } else {
+ print_digest(m, v, hash_digest_size[hdr->algo]);
+ seq_puts(m, "\n");
+ }
+
+ return 0;
+}
+
+static int digest_list_show(struct seq_file *m, void *v)
+{
+ return digest_list_show_common(m, v, true);
+}
+
+static int digest_list_ascii_show(struct seq_file *m, void *v)
+{
+ return digest_list_show_common(m, v, false);
+}
+
+static const struct seq_operations digest_list_seqops = {
+ .start = digest_list_start,
+ .next = digest_list_next,
+ .stop = digest_list_stop,
+ .show = digest_list_show
+};
+
+static int digest_list_seq_open(struct inode *inode, struct file *file)
+{
+ return seq_open(file, &digest_list_seqops);
+}
+
+static const struct file_operations digest_list_ops = {
+ .open = digest_list_seq_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
+static const struct seq_operations digest_list_ascii_seqops = {
+ .start = digest_list_start,
+ .next = digest_list_next,
+ .stop = digest_list_stop,
+ .show = digest_list_ascii_show
+};
+
+static int digest_list_ascii_seq_open(struct inode *inode, struct file *file)
+{
+ return seq_open(file, &digest_list_ascii_seqops);
+}
+
+static const struct file_operations digest_list_ascii_ops = {
+ .open = digest_list_ascii_seq_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
+static int digest_list_get_secfs_files(char *label, u8 *digest,
+ enum hash_algo algo, enum ops op,
+ struct dentry **dentry,
+ struct dentry **dentry_ascii)
+{
+ char digest_list_filename[NAME_MAX + 1] = { 0 };
+ u8 digest_str[IMA_MAX_DIGEST_SIZE * 2 + 1] = { 0 };
+ char *dot, *label_ptr;
+
+ label_ptr = strrchr(label, '/');
+ if (label_ptr)
+ label = label_ptr + 1;
+
+ bin2hex(digest_str, digest, hash_digest_size[algo]);
+
+ snprintf(digest_list_filename, sizeof(digest_list_filename),
+ "%s-%s-%s.ascii", hash_algo_name[algo], digest_str, label);
+
+ dot = strrchr(digest_list_filename, '.');
+
+ *dot = '\0';
+ if (op == DIGEST_LIST_ADD)
+ *dentry = securityfs_create_file(digest_list_filename, 0440,
+ digest_lists_loaded_dir, NULL,
+ &digest_list_ops);
+ else
+ *dentry = lookup_positive_unlocked(digest_list_filename,
+ digest_lists_loaded_dir,
+ strlen(digest_list_filename));
+ *dot = '.';
+ if (IS_ERR(*dentry))
+ return PTR_ERR(*dentry);
+
+ if (op == DIGEST_LIST_ADD)
+ *dentry_ascii = securityfs_create_file(digest_list_filename,
+ 0440, digest_lists_loaded_dir,
+ NULL, &digest_list_ascii_ops);
+ else
+ *dentry_ascii = lookup_positive_unlocked(digest_list_filename,
+ digest_lists_loaded_dir,
+ strlen(digest_list_filename));
+ if (IS_ERR(*dentry_ascii)) {
+ if (op == DIGEST_LIST_ADD)
+ securityfs_remove(*dentry);
+
+ return PTR_ERR(*dentry_ascii);
+ }
+
+ return 0;
+}
+
/*
* check_modsig: detect appended signature
*/
@@ -83,6 +343,7 @@ ssize_t digest_list_read(struct path *root, char *path, enum ops op)
char event_name[NAME_MAX + 9 + 1];
u8 digest[IMA_MAX_DIGEST_SIZE] = { 0 };
enum hash_algo algo;
+ struct dentry *dentry, *dentry_ascii;
int rc, pathlen = strlen(path);

/* Remove \n. */
@@ -129,9 +390,30 @@ ssize_t digest_list_read(struct path *root, char *path, enum ops op)
goto out_vfree;
}

- rc = digest_list_parse(size, data, op, actions, digest, algo, "");
+ rc = digest_list_get_secfs_files(path, digest, algo, op, &dentry,
+ &dentry_ascii);
+ if (rc < 0) {
+ pr_err("unable to create securityfs entries for %s (%d)\n",
+ path, rc);
+ goto out_vfree;
+ }
+
+ rc = digest_list_parse(size, data, op, actions, digest, algo,
+ dentry->d_name.name);
if (rc < 0 && rc != -EEXIST)
pr_err("unable to upload digest list %s (%d)\n", path, rc);
+
+ /* Release reference taken in digest_list_get_secfs_files(). */
+ if (op == DIGEST_LIST_DEL) {
+ dput(dentry);
+ dput(dentry_ascii);
+ }
+
+ if ((rc < 0 && rc != -EEXIST && op == DIGEST_LIST_ADD) ||
+ (rc == size && op == DIGEST_LIST_DEL)) {
+ securityfs_remove(dentry);
+ securityfs_remove(dentry_ascii);
+ }
out_vfree:
vfree(data);
out_allow_write:
@@ -155,7 +437,7 @@ static ssize_t digest_list_write(struct file *file, const char __user *buf,
char *digest_list_label_ptr;
ssize_t result;
enum ops op = DIGEST_LIST_ADD;
- struct dentry *dentry = file_dentry(file);
+ struct dentry *dentry = file_dentry(file), *dentry_ascii;
u8 digest[IMA_MAX_DIGEST_SIZE];
char event_name[NAME_MAX + 11 + 1];
enum hash_algo algo;
@@ -201,12 +483,36 @@ static ssize_t digest_list_write(struct file *file, const char __user *buf,
goto out_kfree;
}

+ result = digest_list_get_secfs_files(
+ digest_list_label[0] != '\0' ?
+ digest_list_label : "parser",
+ digest, algo, op, &dentry,
+ &dentry_ascii);
+ if (result < 0) {
+ pr_err("unable to create securityfs entries for buffer (%ld)\n",
+ result);
+ goto out_kfree;
+ }
+
memset(digest_list_label, 0, sizeof(digest_list_label));

result = digest_list_parse(datalen, data, op, actions, digest,
- algo, "");
+ algo, dentry->d_name.name);
if (result < 0 && result != -EEXIST)
pr_err("unable to upload generated digest list\n");
+
+ /* Release reference taken in digest_list_get_secfs_files(). */
+ if (op == DIGEST_LIST_DEL) {
+ dput(dentry);
+ dput(dentry_ascii);
+ }
+
+ if ((result < 0 && result != -EEXIST &&
+ op == DIGEST_LIST_ADD) ||
+ (result == datalen && op == DIGEST_LIST_DEL)) {
+ securityfs_remove(dentry);
+ securityfs_remove(dentry_ascii);
+ }
}
out_kfree:
kfree(data);
@@ -253,6 +559,11 @@ static int __init diglim_fs_init(void)
if (IS_ERR(diglim_dir))
return -1;

+ digest_lists_loaded_dir = securityfs_create_dir("digest_lists_loaded",
+ diglim_dir);
+ if (IS_ERR(digest_lists_loaded_dir))
+ goto out;
+
digest_list_add_dentry = securityfs_create_file("digest_list_add", 0200,
diglim_dir, NULL,
&digest_list_upload_ops);
@@ -269,6 +580,7 @@ static int __init diglim_fs_init(void)
out:
securityfs_remove(digest_list_del_dentry);
securityfs_remove(digest_list_add_dentry);
+ securityfs_remove(digest_lists_loaded_dir);
securityfs_remove(diglim_dir);
return -1;
}
--
2.25.1

2021-09-14 16:39:01

by Roberto Sassu

Subject: [PATCH v3 09/13] diglim: Interfaces - digest_list_label

Introduce the digest_list_label interface. It can be used to set a label to
be applied to the next digest list (buffer) loaded through digest_list_add.
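The label is only accepted if every byte is a printable non-space character, as enforced by digest_list_label_write() below. That validation can be sketched on its own (hypothetical function name, mirroring the isgraph() check):

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Validate a digest list label the way digest_list_label_write() does:
 * every byte must be a printable non-space character, or a terminating
 * NUL. Returns 0 if valid, -1 otherwise.
 */
static int validate_label(const char *label, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		if (!isgraph((unsigned char)label[i]) && label[i] != '\0')
			return -1;
	return 0;
}
```

Rejecting whitespace and control characters keeps the label safe to reuse verbatim as part of a securityfs file name.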

Signed-off-by: Roberto Sassu <[email protected]>
---
security/integrity/diglim/fs.c | 48 ++++++++++++++++++++++++++++++++++
1 file changed, 48 insertions(+)

diff --git a/security/integrity/diglim/fs.c b/security/integrity/diglim/fs.c
index 4913c1df2918..deeb04f3c42c 100644
--- a/security/integrity/diglim/fs.c
+++ b/security/integrity/diglim/fs.c
@@ -37,6 +37,13 @@ static struct dentry *diglim_dir;
* removed.
*/
static struct dentry *digest_lists_loaded_dir;
+/**
+ * DOC: digest_list_label
+ *
+ * digest_list_label can be used to set a label to be applied to the next digest
+ * list (buffer) loaded through digest_list_add.
+ */
+static struct dentry *digest_list_label_dentry;
/**
* DOC: digest_list_add
*
@@ -553,6 +560,40 @@ static const struct file_operations digest_list_upload_ops = {
.llseek = generic_file_llseek,
};

+/*
+ * digest_list_label_write: write label for next uploaded digest list.
+ */
+static ssize_t digest_list_label_write(struct file *file,
+ const char __user *buf, size_t datalen,
+ loff_t *ppos)
+{
+ int rc, i;
+
+ if (datalen >= sizeof(digest_list_label))
+ return -EINVAL;
+
+ rc = copy_from_user(digest_list_label, buf, datalen);
+ if (rc)
+ return -EFAULT;
+
+ for (i = 0; i < datalen; i++) {
+ if (!isgraph(digest_list_label[i]) &&
+ digest_list_label[i] != '\0') {
+ memset(digest_list_label, 0, sizeof(digest_list_label));
+ return -EINVAL;
+ }
+ }
+
+ return datalen;
+}
+
+static const struct file_operations digest_list_label_ops = {
+ .open = generic_file_open,
+ .write = digest_list_label_write,
+ .read = seq_read,
+ .llseek = generic_file_llseek,
+};
+
static int __init diglim_fs_init(void)
{
diglim_dir = securityfs_create_dir("diglim", integrity_dir);
@@ -576,8 +617,15 @@ static int __init diglim_fs_init(void)
if (IS_ERR(digest_list_del_dentry))
goto out;

+ digest_list_label_dentry = securityfs_create_file("digest_list_label",
+ 0600, diglim_dir, NULL,
+ &digest_list_label_ops);
+ if (IS_ERR(digest_list_label_dentry))
+ goto out;
+
return 0;
out:
+ securityfs_remove(digest_list_label_dentry);
securityfs_remove(digest_list_del_dentry);
securityfs_remove(digest_list_add_dentry);
securityfs_remove(digest_lists_loaded_dir);
--
2.25.1

2021-09-14 16:39:06

by Roberto Sassu

Subject: [PATCH v3 10/13] diglim: Interfaces - digest_query

Introduce the digest_query interface, which allows the user to write a
query in the format <algo>-<digest> and to obtain all digest lists that
include that digest.
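Before the query is accepted, the digest part must contain exactly two hexadecimal characters per digest byte of the named algorithm. A standalone sketch of that check (hypothetical helper name; the kernel uses hash_digest_size[] for the expected length):

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Check that a query of the form "<algo>-<digest>" carries exactly
 * 2 * @digest_size hexadecimal characters after the separator, as
 * digest_query_write() must do before accepting the query.
 * @digest_size is the byte length of the named algorithm's digest
 * (20 for sha1, 32 for sha256, ...).
 */
static int check_query_digest(const char *query, size_t digest_size)
{
	const char *sep = strchr(query, '-');
	size_t n;

	if (!sep || strlen(sep + 1) != digest_size * 2)
		return -1;

	for (n = 0; n < digest_size * 2; n++)
		if (!isxdigit((unsigned char)sep[1 + n]))
			return -1;
	return 0;
}
```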

Signed-off-by: Roberto Sassu <[email protected]>
---
security/integrity/diglim/fs.c | 181 +++++++++++++++++++++++++++++++++
1 file changed, 181 insertions(+)

diff --git a/security/integrity/diglim/fs.c b/security/integrity/diglim/fs.c
index deeb04f3c42c..e383254c72a4 100644
--- a/security/integrity/diglim/fs.c
+++ b/security/integrity/diglim/fs.c
@@ -44,6 +44,13 @@ static struct dentry *digest_lists_loaded_dir;
* list (buffer) loaded through digest_list_add.
*/
static struct dentry *digest_list_label_dentry;
+/**
+ * DOC: digest_query
+ *
+ * digest_query allows the user to write a query in the format
+ * <algo>-<digest> and to obtain all digest lists that include that digest.
+ */
+static struct dentry *digest_query_dentry;
/**
* DOC: digest_list_add
*
@@ -64,6 +71,7 @@ static struct dentry *digest_list_add_dentry;
* described for digest_list_add.
*/
static struct dentry *digest_list_del_dentry;
+char digest_query[CRYPTO_MAX_ALG_NAME + 1 + IMA_MAX_DIGEST_SIZE * 2 + 1];
char digest_list_label[NAME_MAX + 1];

static int parse_digest_list_filename(const char *digest_list_filename,
@@ -264,6 +272,84 @@ static const struct file_operations digest_list_ascii_ops = {
.release = seq_release,
};

+/*
+ * *pos is the n-th reference to show among all the references in all digest
+ * items found with the query.
+ */
+static void *digest_query_start(struct seq_file *m, loff_t *pos)
+{
+ struct digest_item *d;
+ u8 digest[IMA_MAX_DIGEST_SIZE];
+ enum hash_algo algo;
+ loff_t count = 0;
+ enum compact_types type = 0;
+ struct digest_list_item_ref *ref;
+ int ret;
+
+ ret = parse_digest_list_filename(digest_query, digest, &algo);
+ if (ret < 0)
+ return NULL;
+
+ for (type = 0; type < COMPACT__LAST; type++) {
+ d = __digest_lookup(digest, algo, type, NULL, NULL);
+ if (!d)
+ continue;
+
+ list_for_each_entry(ref, &d->refs, list) {
+ if (count++ == *pos) {
+ m->private = d;
+ return ref;
+ }
+ }
+ }
+
+ return NULL;
+}
+
+static void *digest_query_next(struct seq_file *m, void *v, loff_t *pos)
+{
+ struct digest_item *d = (struct digest_item *)m->private;
+ struct digest_list_item_ref *cur_ref = (struct digest_list_item_ref *)v;
+ struct digest_list_item_ref *ref;
+
+ (*pos)++;
+
+ list_for_each_entry(ref, &d->refs, list) {
+ if (ref != cur_ref)
+ continue;
+
+ if (!list_is_last(&cur_ref->list, &d->refs))
+ return list_next_entry(cur_ref, list);
+ }
+
+ return NULL;
+}
+
+static void digest_query_stop(struct seq_file *m, void *v)
+{
+}
+
+static int digest_query_show(struct seq_file *m, void *v)
+{
+ struct digest_list_item_ref *ref = (struct digest_list_item_ref *)v;
+ struct digest_list_item *digest_list = ref->digest_list;
+ struct compact_list_hdr *hdr = get_hdr_ref(ref);
+
+ if (!ref->digest_offset) {
+ seq_printf(m, "%s (actions: %d): type: %d, size: %lld\n",
+ digest_list->label, digest_list->actions,
+ COMPACT_DIGEST_LIST, digest_list->size);
+ return 0;
+ }
+
+ seq_printf(m,
+ "%s (actions: %d): version: %d, algo: %s, type: %d, modifiers: %d, count: %d, datalen: %d\n",
+ digest_list->label, digest_list->actions, hdr->version,
+ hash_algo_name[hdr->algo], hdr->type, hdr->modifiers,
+ hdr->count, hdr->datalen);
+ return 0;
+}
+
static int digest_list_get_secfs_files(char *label, u8 *digest,
enum hash_algo algo, enum ops op,
struct dentry **dentry,
@@ -594,6 +680,94 @@ static const struct file_operations digest_list_label_ops = {
.llseek = generic_file_llseek,
};

+static const struct seq_operations digest_query_seqops = {
+ .start = digest_query_start,
+ .next = digest_query_next,
+ .stop = digest_query_stop,
+ .show = digest_query_show,
+};
+
+/*
+ * digest_query_open: sequentialize access to the add/del/query files
+ */
+static int digest_query_open(struct inode *inode, struct file *file)
+{
+ if (test_and_set_bit(0, &flags))
+ return -EBUSY;
+
+ if (file->f_flags & O_WRONLY)
+ return 0;
+
+ return seq_open(file, &digest_query_seqops);
+}
+
+/*
+ * digest_query_write: write digest query (<algo>-<digest>).
+ */
+static ssize_t digest_query_write(struct file *file, const char __user *buf,
+ size_t datalen, loff_t *ppos)
+{
+ char *sep;
+ int rc, i, j;
+
+ if (datalen >= sizeof(digest_query))
+ return -EINVAL;
+
+ rc = copy_from_user(digest_query, buf, datalen);
+ if (rc)
+ return -EFAULT;
+
+ sep = strchr(digest_query, '-');
+ if (!sep) {
+ rc = -EINVAL;
+ goto out;
+ }
+
+ *sep = '\0';
+ i = match_string(hash_algo_name, HASH_ALGO__LAST, digest_query);
+ if (i < 0) {
+ rc = -ENOENT;
+ goto out;
+ }
+
+ *sep = '-';
+
+ for (j = 0; j < hash_digest_size[i] * 2; j++) {
+ if (!isxdigit(sep[j + 1])) {
+ rc = -EINVAL;
+ goto out;
+ }
+ }
+out:
+ if (rc < 0) {
+ memset(digest_query, 0, sizeof(digest_query));
+ return rc;
+ }
+
+ return datalen;
+}
+
+/*
+ * digest_query_release - release the query file
+ */
+static int digest_query_release(struct inode *inode, struct file *file)
+{
+ clear_bit(0, &flags);
+
+ if (file->f_flags & O_WRONLY)
+ return 0;
+
+ return seq_release(inode, file);
+}
+
+static const struct file_operations digest_query_ops = {
+ .open = digest_query_open,
+ .write = digest_query_write,
+ .read = seq_read,
+ .release = digest_query_release,
+ .llseek = generic_file_llseek,
+};
+
static int __init diglim_fs_init(void)
{
diglim_dir = securityfs_create_dir("diglim", integrity_dir);
@@ -623,8 +797,15 @@ static int __init diglim_fs_init(void)
if (IS_ERR(digest_list_label_dentry))
goto out;

+ digest_query_dentry = securityfs_create_file("digest_query", 0600,
+ diglim_dir, NULL,
+ &digest_query_ops);
+ if (IS_ERR(digest_query_dentry))
+ goto out;
+
return 0;
out:
+ securityfs_remove(digest_query_dentry);
securityfs_remove(digest_list_label_dentry);
securityfs_remove(digest_list_del_dentry);
securityfs_remove(digest_list_add_dentry);
--
2.25.1
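The ``digest_query_write()`` handler above accepts queries of the form ``<algo>-<digest>``: an algorithm name, a ``-`` separator, and exactly ``2 * digest_size`` hex characters. A user-space tool would typically validate the string before writing it to ``digest_query``. Below is a hedged sketch of such a validator; the ``validate_query()`` helper and the reduced algorithm tables are illustrative, not part of the kernel API:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Illustrative subset of the kernel's hash algorithm tables. */
static const char *const algo_names[] = { "sha1", "sha256", "sha512" };
static const int algo_sizes[] = { 20, 32, 64 };
#define NUM_ALGOS (sizeof(algo_names) / sizeof(algo_names[0]))

/*
 * Validate an "<algo>-<digest>" query string: look up the algorithm
 * name before the '-', then require exactly 2 * digest_size hex
 * characters after it. Returns the algorithm index, or -1 on
 * malformed input.
 */
static int validate_query(const char *query)
{
	const char *sep = strchr(query, '-');
	size_t i, j, hex_len;

	if (!sep)
		return -1;

	for (i = 0; i < NUM_ALGOS; i++)
		if ((size_t)(sep - query) == strlen(algo_names[i]) &&
		    !strncmp(query, algo_names[i], sep - query))
			break;

	if (i == NUM_ALGOS)
		return -1;

	hex_len = (size_t)algo_sizes[i] * 2;
	if (strlen(sep + 1) != hex_len)
		return -1;

	/* Every digest character must be a hex digit. */
	for (j = 1; j <= hex_len; j++)
		if (!isxdigit((unsigned char)sep[j]))
			return -1;

	return (int)i;
}
```

A tool would then write the validated string to ``/sys/kernel/security/integrity/diglim/digest_query`` and read the same file back to retrieve the matching digest list references.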

2021-09-14 16:39:26

by Roberto Sassu

Subject: [PATCH v3 12/13] diglim: Remote Attestation

Add more information about remote attestation with IMA and DIGLIM in
Documentation/security/diglim/remote_attestation.rst.

Signed-off-by: Roberto Sassu <[email protected]>
---
Documentation/security/diglim/index.rst | 1 +
.../security/diglim/remote_attestation.rst | 87 +++++++++++++++++++
MAINTAINERS | 1 +
3 files changed, 89 insertions(+)
create mode 100644 Documentation/security/diglim/remote_attestation.rst

diff --git a/Documentation/security/diglim/index.rst b/Documentation/security/diglim/index.rst
index 4771134c2f0d..0f28c5ad71c0 100644
--- a/Documentation/security/diglim/index.rst
+++ b/Documentation/security/diglim/index.rst
@@ -10,3 +10,4 @@ Digest Lists Integrity Module (DIGLIM)
introduction
architecture
implementation
+ remote_attestation
diff --git a/Documentation/security/diglim/remote_attestation.rst b/Documentation/security/diglim/remote_attestation.rst
new file mode 100644
index 000000000000..d22d01ce3e40
--- /dev/null
+++ b/Documentation/security/diglim/remote_attestation.rst
@@ -0,0 +1,87 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Remote Attestation
+==================
+
+When a digest list is added or deleted through the ``digest_list_add`` or
+``digest_list_del`` interfaces, the function ``diglim_ima_get_info()`` is
+called to retrieve the integrity status from IMA. This function supports
+two methods: by file, where the integrity information is retrieved from
+the ``integrity_iint_cache`` structure associated with the inode, if found;
+by buffer, where the buffer (directly written to securityfs, or filled from
+a file read by the kernel) is passed to ``ima_measure_critical_data()`` for
+measurement.
+
+For the by file method, existing IMA rules can be used, as long as the
+digest list matches the criteria. For the by buffer method, the following
+rule must be added to the IMA policy::
+
+ measure func=CRITICAL_DATA label=diglim
+
+The second method gives more accurate information, as it creates a
+measurement entry during addition and deletion, while the first method
+creates an entry only during addition.
+
+Below is an example of using the by buffer method.
+
+When a file is uploaded, the workflow and the resulting IMA measurement
+list are:
+
+.. code-block:: bash
+
+ # echo $PWD/0-file_list-compact-cat > /sys/kernel/security/integrity/diglim/digest_list_add
+ # echo $PWD/0-file_list-compact-cat > /sys/kernel/security/integrity/diglim/digest_list_del
+ # cat /sys/kernel/security/integrity/ima/ascii_runtime_measurements
+ ...
+ 10 <template digest> ima-buf sha256:<buffer digest> add_file_0-file_list-compact-cat <buffer>
+ 10 <template digest> ima-buf sha256:<buffer digest> del_file_0-file_list-compact-cat <buffer>
+
+When a buffer is uploaded, the workflow and the resulting IMA measurement
+list are:
+
+.. code-block:: bash
+
+ # echo 0-file_list-compact-cat > /sys/kernel/security/integrity/diglim/digest_list_label
+ # cat 0-file_list-compact-cat > /sys/kernel/security/integrity/diglim/digest_list_add
+ # echo 0-file_list-compact-cat > /sys/kernel/security/integrity/diglim/digest_list_label
+ # cat 0-file_list-compact-cat > /sys/kernel/security/integrity/diglim/digest_list_del
+ # cat /sys/kernel/security/integrity/ima/ascii_runtime_measurements
+ ...
+ 10 <template digest> ima-buf sha256:<buffer digest> add_buffer_0-file_list-compact-cat <buffer>
+ 10 <template digest> ima-buf sha256:<buffer digest> del_buffer_0-file_list-compact-cat <buffer>
+
+In the second case, the digest list label must be set explicitly, as the
+kernel cannot determine it by itself (in the first case it is derived from
+the name of the file uploaded).
+
+The confirmation that the digest list has been processed by IMA can be
+obtained by reading the ASCII representation of the digest list:
+
+.. code-block:: bash
+
+ # cat /sys/kernel/security/integrity/diglim/digest_lists_loaded/sha256-<digest list digest>-0-file_list-compact-cat.ascii
+ actions: 1, version: 1, algo: sha256, type: 2, modifiers: 1, count: 1, datalen: 32
+ 87e5bd81850e11eeec2d3bb696b626b2a7f45673241cbbd64769c83580432869
+
+In this output, ``actions`` is set to 1 (``COMPACT_ACTION_IMA_MEASURED``
+bit set).
+
+
+DIGLIM guarantees that the information reported in the IMA measurement list
+is complete (with the by buffer method). If digest list loading is not
+recorded, digest query results are ignored by IMA. If the addition was
+recorded, deletion can be performed only if the deletion is also recorded.
+This can be seen in the following sequence of commands:
+
+.. code-block:: bash
+
+ # echo 0-file_list-compact-cat > /sys/kernel/security/integrity/diglim/digest_list_label
+ # cat 0-file_list-compact-cat > /sys/kernel/security/integrity/diglim/digest_list_add
+ # echo 0-file_list-compact-cat > /sys/kernel/security/integrity/diglim/digest_list_label
+ # /tmp/cat 0-file_list-compact-cat > /sys/kernel/security/integrity/diglim/digest_list_del
+ diglim: actions mismatch, add: 1, del: 0
+ diglim: unable to upload generated digest list
+ /tmp/cat: write error: Invalid argument
+
+Measurement of the digest list is avoided by executing ``/tmp/cat``, for
+which a dont_measure rule was previously added to the IMA policy.
diff --git a/MAINTAINERS b/MAINTAINERS
index 826f628f4fab..eac82f151d18 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5511,6 +5511,7 @@ F: Documentation/security/diglim/architecture.rst
F: Documentation/security/diglim/implementation.rst
F: Documentation/security/diglim/index.rst
F: Documentation/security/diglim/introduction.rst
+F: Documentation/security/diglim/remote_attestation.rst
F: include/linux/diglim.h
F: include/uapi/linux/diglim.h
F: security/integrity/diglim/diglim.h
--
2.25.1
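A remote verifier consuming the measurement list shown above has to split each ``ima-buf`` line into its fields (PCR, template digest, template name, buffer digest, event name). The following is a hedged sketch of such a parser; the ``ima_buf_entry`` structure, the ``parse_ima_buf_line()`` helper, and the field widths are illustrative assumptions, and the trailing buffer hex present in real entries is ignored:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Leading fields of one "ima-buf" measurement list line. */
struct ima_buf_entry {
	int pcr;
	char template_digest[129];
	char template_name[33];
	char buffer_digest[129];	/* "<algo>:<hex>" */
	char event_name[257];
};

/*
 * Split one ascii_runtime_measurements line into its first five
 * fields and check that it uses the ima-buf template. Returns 0 on
 * success, -1 on a malformed or non-ima-buf line.
 */
static int parse_ima_buf_line(const char *line, struct ima_buf_entry *e)
{
	if (sscanf(line, "%d %128s %32s %128s %256s",
		   &e->pcr, e->template_digest, e->template_name,
		   e->buffer_digest, e->event_name) != 5)
		return -1;

	if (strcmp(e->template_name, "ima-buf"))
		return -1;

	return 0;
}
```

The verifier can then match ``event_name`` against the ``add_buffer_``/``del_buffer_`` prefixes to pair digest list additions with deletions.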

2021-09-14 16:41:41

by Roberto Sassu

Subject: [PATCH v3 11/13] diglim: Interfaces - digests_count

Introduce the digests_count interface, which shows the current number of
items stored in the hash table by type.

Reported-by: kernel test robot <[email protected]> (frame size warning)
Signed-off-by: Roberto Sassu <[email protected]>
---
security/integrity/diglim/fs.c | 48 ++++++++++++++++++++++++++++++++++
1 file changed, 48 insertions(+)

diff --git a/security/integrity/diglim/fs.c b/security/integrity/diglim/fs.c
index e383254c72a4..467ff4f7c0ce 100644
--- a/security/integrity/diglim/fs.c
+++ b/security/integrity/diglim/fs.c
@@ -24,6 +24,7 @@
#include "diglim.h"

#define MAX_DIGEST_LIST_SIZE (64 * 1024 * 1024 - 1)
+#define TMPBUF_SIZE 512

static struct dentry *diglim_dir;
/**
@@ -37,6 +38,13 @@ static struct dentry *diglim_dir;
* removed.
*/
static struct dentry *digest_lists_loaded_dir;
+/**
+ * DOC: digests_count
+ *
+ * digests_count shows the current number of digests stored in the hash
+ * table by type.
+ */
+static struct dentry *digests_count;
/**
* DOC: digest_list_label
*
@@ -74,6 +82,39 @@ static struct dentry *digest_list_del_dentry;
char digest_query[CRYPTO_MAX_ALG_NAME + 1 + IMA_MAX_DIGEST_SIZE * 2 + 1];
char digest_list_label[NAME_MAX + 1];

+static char *types_str[COMPACT__LAST] = {
+ [COMPACT_PARSER] = "Parser",
+ [COMPACT_FILE] = "File",
+ [COMPACT_METADATA] = "Metadata",
+ [COMPACT_DIGEST_LIST] = "Digest list",
+};
+
+static ssize_t diglim_show_htable_len(struct file *filp, char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ char *tmpbuf;
+ ssize_t ret, len = 0;
+ int i;
+
+ tmpbuf = kmalloc(TMPBUF_SIZE, GFP_KERNEL);
+ if (!tmpbuf)
+ return -ENOMEM;
+
+ for (i = 0; i < COMPACT__LAST; i++)
+ len += scnprintf(tmpbuf + len, TMPBUF_SIZE - len,
+ "%s digests: %lu\n", types_str[i],
+ diglim_htable[i].len);
+
+ ret = simple_read_from_buffer(buf, count, ppos, tmpbuf, len);
+ kfree(tmpbuf);
+ return ret;
+}
+
+static const struct file_operations htable_len_ops = {
+ .read = diglim_show_htable_len,
+ .llseek = generic_file_llseek,
+};
+
static int parse_digest_list_filename(const char *digest_list_filename,
u8 *digest, enum hash_algo *algo)
{
@@ -779,6 +820,12 @@ static int __init diglim_fs_init(void)
if (IS_ERR(digest_lists_loaded_dir))
goto out;

+ digests_count = securityfs_create_file("digests_count", 0440,
+ diglim_dir, NULL,
+ &htable_len_ops);
+ if (IS_ERR(digests_count))
+ goto out;
+
digest_list_add_dentry = securityfs_create_file("digest_list_add", 0200,
diglim_dir, NULL,
&digest_list_upload_ops);
@@ -809,6 +856,7 @@ static int __init diglim_fs_init(void)
securityfs_remove(digest_list_label_dentry);
securityfs_remove(digest_list_del_dentry);
securityfs_remove(digest_list_add_dentry);
+ securityfs_remove(digests_count);
securityfs_remove(digest_lists_loaded_dir);
securityfs_remove(diglim_dir);
return -1;
--
2.25.1
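The ``digests_count`` interface above emits one line per type, formatted by ``diglim_show_htable_len()`` as ``<type> digests: <count>``. Since a type name can itself contain a space ("Digest list"), splitting on whitespace is not enough. Below is a hedged sketch of a user-space parser for one such line; the ``parse_count_line()`` helper is an illustrative name, not part of any DIGLIM tooling:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Parse one "<type> digests: <count>" line by splitting on the
 * " digests: " separator rather than on whitespace, so multi-word
 * type names such as "Digest list" survive intact. Returns 0 on
 * success, -1 on a malformed line.
 */
static int parse_count_line(const char *line, char *type, size_t type_size,
			    unsigned long *count)
{
	static const char sep[] = " digests: ";
	const char *p = strstr(line, sep);
	size_t n;

	if (!p)
		return -1;

	n = (size_t)(p - line);
	if (n == 0 || n >= type_size)
		return -1;

	memcpy(type, line, n);
	type[n] = '\0';
	*count = strtoul(p + sizeof(sep) - 1, NULL, 10);
	return 0;
}
```

A monitoring tool could read ``/sys/kernel/security/integrity/diglim/digests_count`` and feed each line through this helper to track hash table growth per type.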

2021-09-14 16:42:09

by Roberto Sassu

Subject: [PATCH v3 13/13] diglim: Tests

Introduce a number of tests to ensure that DIGLIM works as expected:

- digest_list_add_del_test_file_upload;
- digest_list_add_del_test_file_upload_fault;
- digest_list_add_del_test_buffer_upload;
- digest_list_add_del_test_buffer_upload_fault;
- digest_list_fuzzing_test;
- digest_list_add_del_test_file_upload_measured;
- digest_list_add_del_test_file_upload_measured_chown;
- digest_list_check_measurement_list_test_file_upload;
- digest_list_check_measurement_list_test_buffer_upload.

The tests are in tools/testing/selftests/diglim/selftest.c.

A description of the tests can be found in
Documentation/security/diglim/tests.rst.

Signed-off-by: Roberto Sassu <[email protected]>
---
Documentation/security/diglim/index.rst | 1 +
Documentation/security/diglim/tests.rst | 70 +
MAINTAINERS | 2 +
tools/testing/selftests/Makefile | 1 +
tools/testing/selftests/diglim/Makefile | 19 +
tools/testing/selftests/diglim/common.c | 135 ++
tools/testing/selftests/diglim/common.h | 32 +
tools/testing/selftests/diglim/config | 3 +
tools/testing/selftests/diglim/selftest.c | 1442 +++++++++++++++++++++
9 files changed, 1705 insertions(+)
create mode 100644 Documentation/security/diglim/tests.rst
create mode 100644 tools/testing/selftests/diglim/Makefile
create mode 100644 tools/testing/selftests/diglim/common.c
create mode 100644 tools/testing/selftests/diglim/common.h
create mode 100644 tools/testing/selftests/diglim/config
create mode 100644 tools/testing/selftests/diglim/selftest.c

diff --git a/Documentation/security/diglim/index.rst b/Documentation/security/diglim/index.rst
index 0f28c5ad71c0..d4ba4ce50a59 100644
--- a/Documentation/security/diglim/index.rst
+++ b/Documentation/security/diglim/index.rst
@@ -11,3 +11,4 @@ Digest Lists Integrity Module (DIGLIM)
architecture
implementation
remote_attestation
+ tests
diff --git a/Documentation/security/diglim/tests.rst b/Documentation/security/diglim/tests.rst
new file mode 100644
index 000000000000..899e7d6683cf
--- /dev/null
+++ b/Documentation/security/diglim/tests.rst
@@ -0,0 +1,70 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Testing
+=======
+
+This section introduces a number of tests to ensure that DIGLIM works as
+expected:
+
+- ``digest_list_add_del_test_file_upload``;
+- ``digest_list_add_del_test_file_upload_fault``;
+- ``digest_list_add_del_test_buffer_upload``;
+- ``digest_list_add_del_test_buffer_upload_fault``;
+- ``digest_list_fuzzing_test``;
+- ``digest_list_add_del_test_file_upload_measured``;
+- ``digest_list_add_del_test_file_upload_measured_chown``;
+- ``digest_list_check_measurement_list_test_file_upload``;
+- ``digest_list_check_measurement_list_test_buffer_upload``.
+
+The tests are in ``tools/testing/selftests/diglim/selftest.c``.
+
+The first four tests randomly perform add, delete and query of digest
+lists. They internally keep track, at all times, of the digest lists
+currently uploaded to the kernel.
+
+Also, digest lists are generated randomly by selecting an arbitrary digest
+algorithm and an arbitrary number of digests. To ensure a good number of
+collisions, digests are a sequence of zeros, except for the first four
+bytes, which are set to a random number within a defined range.
+
+When a query operation is selected, a digest is chosen by getting another
+random number within the same range. Then, the tests count how many times
+the digest is found in the internally stored digest lists and in the query
+result obtained from the kernel. The tests are successful if the obtained
+numbers are the same.
+
+The ``file_upload`` variant creates a temporary file from a generated
+digest list and sends its path to the kernel, so that the file is uploaded.
+The ``buffer_upload`` variant directly sends the digest list buffer to the
+kernel (as the user space parser will do after converting a digest list
+that is not in the compact format).
+
+The ``fault`` variant performs the test by enabling the ad-hoc fault
+injection mechanism in the kernel (accessible through
+``<debugfs>/fail_diglim``). The fault injection mechanism randomly injects
+errors during the addition and deletion of digest lists. When an error
+occurs, the rollback mechanism performs the reverse operation up to the
+point where the error occurred, so that the kernel is left in the same state as
+when the requested operation began. Since the kernel returns the error to
+user space, the tests also know that the operation didn't succeed and
+behave accordingly (they also revert the internal state).
+
+The fuzzing test simply sends randomly generated digest lists to the
+kernel, to ensure that the parser is robust enough to handle malformed
+data.
+
+The ``measured`` and ``measured_chown`` variants of the
+``digest_list_add_del_test`` series check whether the digest list actions
+are properly set after adding IMA rules to measure the digest lists. The
+``measured`` variant is expected to match the IMA rule for critical data,
+while the ``measured_chown`` variant is expected to match the IMA rule for
+files with UID 3000.
+
+The ``digest_list_check_measurement_list_test`` tests verify the remote
+attestation functionality. They verify that IMA creates a measurement
+entry for each addition and deletion of a digest list, and that the
+deletion is forbidden if IMA created a measurement entry only for the
+addition.
+
+The ``file_upload`` variant uploads a file, while the ``buffer_upload``
+variant uploads a buffer.
diff --git a/MAINTAINERS b/MAINTAINERS
index eac82f151d18..033c70014568 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5512,6 +5512,7 @@ F: Documentation/security/diglim/implementation.rst
F: Documentation/security/diglim/index.rst
F: Documentation/security/diglim/introduction.rst
F: Documentation/security/diglim/remote_attestation.rst
+F: Documentation/security/diglim/tests.rst
F: include/linux/diglim.h
F: include/uapi/linux/diglim.h
F: security/integrity/diglim/diglim.h
@@ -5519,6 +5520,7 @@ F: security/integrity/diglim/fs.c
F: security/integrity/diglim/ima.c
F: security/integrity/diglim/methods.c
F: security/integrity/diglim/parser.c
+F: tools/testing/selftests/diglim/

DIOLAN U2C-12 I2C DRIVER
M: Guenter Roeck <[email protected]>
diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
index c852eb40c4f7..667cd738327b 100644
--- a/tools/testing/selftests/Makefile
+++ b/tools/testing/selftests/Makefile
@@ -8,6 +8,7 @@ TARGETS += clone3
TARGETS += core
TARGETS += cpufreq
TARGETS += cpu-hotplug
+TARGETS += diglim
TARGETS += drivers/dma-buf
TARGETS += efivarfs
TARGETS += exec
diff --git a/tools/testing/selftests/diglim/Makefile b/tools/testing/selftests/diglim/Makefile
new file mode 100644
index 000000000000..100c219955d7
--- /dev/null
+++ b/tools/testing/selftests/diglim/Makefile
@@ -0,0 +1,19 @@
+# SPDX-License-Identifier: GPL-2.0
+LDFLAGS += -lcrypto
+
+CFLAGS += -O2 -Wall -Wl,-no-as-needed -g -I./ -I../../../../usr/include/ \
+ -L$(OUTPUT) -Wl,-rpath=./ -ggdb
+LDLIBS += -lpthread
+
+OVERRIDE_TARGETS = 1
+
+TEST_GEN_PROGS = selftest
+TEST_GEN_PROGS_EXTENDED = libcommon.so
+
+include ../lib.mk
+
+$(OUTPUT)/libcommon.so: common.c
+ $(CC) $(CFLAGS) -shared -fPIC $< $(LDLIBS) -o $@
+
+$(OUTPUT)/selftest: selftest.c $(TEST_GEN_PROGS_EXTENDED)
+ $(CC) $(CFLAGS) $< -o $@ $(LDFLAGS) -lcommon
diff --git a/tools/testing/selftests/diglim/common.c b/tools/testing/selftests/diglim/common.c
new file mode 100644
index 000000000000..20d693a4fc26
--- /dev/null
+++ b/tools/testing/selftests/diglim/common.c
@@ -0,0 +1,135 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2005,2006,2007,2008 IBM Corporation
+ * Copyright (C) 2017-2021 Huawei Technologies Duesseldorf GmbH
+ *
+ * Author: Roberto Sassu <[email protected]>
+ *
+ * Common functions.
+ */
+
+#include <sys/random.h>
+#include <errno.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <fcntl.h>
+#include <ctype.h>
+#include <malloc.h>
+#include <unistd.h>
+#include <string.h>
+#include <limits.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <linux/types.h>
+#include <linux/hash_info.h>
+
+#include "common.h"
+
+int write_buffer(char *path, char *buffer, size_t buffer_len, int uid)
+{
+ ssize_t to_write = buffer_len, written = 0;
+ int ret = 0, fd, cur_uid = geteuid();
+ int open_flags = O_WRONLY;
+ struct stat st;
+
+ if (stat(path, &st) == -1)
+ open_flags |= O_CREAT;
+
+ fd = open(path, open_flags, 0644);
+ if (fd < 0)
+ return -errno;
+
+ if (uid >= 0 && seteuid(uid) < 0) {
+ ret = -errno;
+ close(fd);
+ return ret;
+ }
+
+ while (to_write) {
+ written = write(fd, buffer + buffer_len - to_write, to_write);
+ if (written <= 0) {
+ ret = -errno;
+ break;
+ }
+
+ to_write -= written;
+ }
+
+ if (uid >= 0 && seteuid(cur_uid) < 0) {
+ ret = -errno;
+ close(fd);
+ return ret;
+ }
+
+ close(fd);
+ return ret;
+}
+
+int read_buffer(char *path, char **buffer, size_t *buffer_len, bool alloc,
+ bool is_char)
+{
+ ssize_t len = 0, read_len;
+ int ret = 0, fd;
+
+ fd = open(path, O_RDONLY);
+ if (fd < 0)
+ return -errno;
+
+ if (alloc) {
+ *buffer = NULL;
+ *buffer_len = 0;
+ }
+
+ while (1) {
+ if (alloc) {
+ if (*buffer_len == len) {
+ *buffer_len += BUFFER_SIZE;
+ *buffer = realloc(*buffer, *buffer_len + 1);
+ if (!*buffer) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ }
+ }
+
+ read_len = read(fd, *buffer + len, *buffer_len - len);
+ if (read_len < 0) {
+ ret = -errno;
+ goto out;
+ }
+
+ if (!read_len)
+ break;
+
+ len += read_len;
+ }
+
+ *buffer_len = len;
+ if (is_char)
+ (*buffer)[(*buffer_len)++] = '\0';
+out:
+ close(fd);
+ if (ret < 0) {
+ if (alloc) {
+ free(*buffer);
+ *buffer = NULL;
+ }
+ }
+
+ return ret;
+}
+
+int copy_file(char *src_path, char *dst_path)
+{
+ char *buffer;
+ size_t buffer_len;
+ int ret;
+
+ ret = read_buffer(src_path, &buffer, &buffer_len, true, false);
+ if (!ret) {
+ ret = write_buffer(dst_path, buffer, buffer_len, -1);
+ free(buffer);
+ }
+
+ return ret;
+}
diff --git a/tools/testing/selftests/diglim/common.h b/tools/testing/selftests/diglim/common.h
new file mode 100644
index 000000000000..6c7979f4182e
--- /dev/null
+++ b/tools/testing/selftests/diglim/common.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2005,2006,2007,2008 IBM Corporation
+ * Copyright (C) 2017-2021 Huawei Technologies Duesseldorf GmbH
+ *
+ * Author: Roberto Sassu <[email protected]>
+ *
+ * Header of common.c
+ */
+
+#include <sys/random.h>
+#include <errno.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <fcntl.h>
+#include <ctype.h>
+#include <malloc.h>
+#include <unistd.h>
+#include <string.h>
+#include <limits.h>
+#include <stdbool.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <linux/types.h>
+#include <linux/hash_info.h>
+
+#define BUFFER_SIZE 1024
+
+int write_buffer(char *path, char *buffer, size_t buffer_len, int uid);
+int read_buffer(char *path, char **buffer, size_t *buffer_len, bool alloc,
+ bool is_char);
+int copy_file(char *src_path, char *dst_path);
diff --git a/tools/testing/selftests/diglim/config b/tools/testing/selftests/diglim/config
new file mode 100644
index 000000000000..faafc742974c
--- /dev/null
+++ b/tools/testing/selftests/diglim/config
@@ -0,0 +1,3 @@
+CONFIG_DIGEST_LISTS=y
+CONFIG_FAULT_INJECTION=y
+CONFIG_FAULT_INJECTION_DEBUG_FS=y
diff --git a/tools/testing/selftests/diglim/selftest.c b/tools/testing/selftests/diglim/selftest.c
new file mode 100644
index 000000000000..273ba80c43fd
--- /dev/null
+++ b/tools/testing/selftests/diglim/selftest.c
@@ -0,0 +1,1442 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2005,2006,2007,2008 IBM Corporation
+ * Copyright (C) 2017-2021 Huawei Technologies Duesseldorf GmbH
+ *
+ * Author: Roberto Sassu <[email protected]>
+ *
+ * Functions to test DIGLIM.
+ */
+
+#include <sys/random.h>
+#include <errno.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <fcntl.h>
+#include <ctype.h>
+#include <sys/mman.h>
+#include <unistd.h>
+#include <string.h>
+#include <limits.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <linux/types.h>
+#include <linux/hash_info.h>
+#include <linux/diglim.h>
+#include <bits/endianness.h>
+
+#if __BYTE_ORDER == __BIG_ENDIAN
+#include <linux/byteorder/big_endian.h>
+#else
+#include <linux/byteorder/little_endian.h>
+#endif
+
+#include <openssl/evp.h>
+
+#include "common.h"
+#include "../kselftest_harness.h"
+
+typedef uint8_t u8;
+typedef uint16_t u16;
+typedef uint32_t u32;
+typedef uint64_t u64;
+
+#define MD5_DIGEST_SIZE 16
+#define SHA1_DIGEST_SIZE 20
+#define RMD160_DIGEST_SIZE 20
+#define SHA256_DIGEST_SIZE 32
+#define SHA384_DIGEST_SIZE 48
+#define SHA512_DIGEST_SIZE 64
+#define SHA224_DIGEST_SIZE 28
+#define RMD128_DIGEST_SIZE 16
+#define RMD256_DIGEST_SIZE 32
+#define RMD320_DIGEST_SIZE 40
+#define WP256_DIGEST_SIZE 32
+#define WP384_DIGEST_SIZE 48
+#define WP512_DIGEST_SIZE 64
+#define TGR128_DIGEST_SIZE 16
+#define TGR160_DIGEST_SIZE 20
+#define TGR192_DIGEST_SIZE 24
+#define SM3256_DIGEST_SIZE 32
+#define STREEBOG256_DIGEST_SIZE 32
+#define STREEBOG512_DIGEST_SIZE 64
+
+#define DIGEST_LIST_PATH_TEMPLATE "/tmp/digest_list.XXXXXX"
+
+#define INTEGRITY_DIR "/sys/kernel/security/integrity"
+
+#define DIGEST_LIST_DIR INTEGRITY_DIR "/diglim"
+#define DIGEST_QUERY_PATH DIGEST_LIST_DIR "/digest_query"
+#define DIGEST_LABEL_PATH DIGEST_LIST_DIR "/digest_list_label"
+#define DIGEST_LIST_ADD_PATH DIGEST_LIST_DIR "/digest_list_add"
+#define DIGEST_LIST_DEL_PATH DIGEST_LIST_DIR "/digest_list_del"
+#define DIGEST_LISTS_LOADED_PATH DIGEST_LIST_DIR "/digest_lists_loaded"
+#define DIGESTS_COUNT DIGEST_LIST_DIR "/digests_count"
+
+#define IMA_POLICY_PATH INTEGRITY_DIR "/ima/policy"
+#define IMA_MEASUREMENTS_PATH INTEGRITY_DIR "/ima/ascii_runtime_measurements"
+
+#define DIGEST_LIST_DEBUGFS_DIR "/sys/kernel/debug/fail_diglim"
+#define DIGEST_LIST_DEBUGFS_TASK_FILTER DIGEST_LIST_DEBUGFS_DIR "/task-filter"
+#define DIGEST_LIST_DEBUGFS_PROBABILITY DIGEST_LIST_DEBUGFS_DIR "/probability"
+#define DIGEST_LIST_DEBUGFS_TIMES DIGEST_LIST_DEBUGFS_DIR "/times"
+#define DIGEST_LIST_DEBUGFS_VERBOSE DIGEST_LIST_DEBUGFS_DIR "/verbose"
+#define PROCFS_SELF_FAULT "/proc/self/make-it-fail"
+
+#define MAX_LINE_LENGTH 512
+#define LABEL_LEN 32
+#define MAX_DIGEST_COUNT 100
+#define MAX_DIGEST_LISTS 100
+#define MAX_DIGEST_BLOCKS 10
+#define MAX_DIGEST_VALUE 10
+#define MAX_SEARCH_ATTEMPTS 10
+#define NUM_QUERIES 1000
+#define MAX_DIGEST_LIST_SIZE 10000
+#define NUM_ITERATIONS 100000
+
+enum upload_types { UPLOAD_FILE, UPLOAD_FILE_CHOWN, UPLOAD_BUFFER };
+
+const char *const hash_algo_name[HASH_ALGO__LAST] = {
+ [HASH_ALGO_MD4] = "md4",
+ [HASH_ALGO_MD5] = "md5",
+ [HASH_ALGO_SHA1] = "sha1",
+ [HASH_ALGO_RIPE_MD_160] = "rmd160",
+ [HASH_ALGO_SHA256] = "sha256",
+ [HASH_ALGO_SHA384] = "sha384",
+ [HASH_ALGO_SHA512] = "sha512",
+ [HASH_ALGO_SHA224] = "sha224",
+ [HASH_ALGO_RIPE_MD_128] = "rmd128",
+ [HASH_ALGO_RIPE_MD_256] = "rmd256",
+ [HASH_ALGO_RIPE_MD_320] = "rmd320",
+ [HASH_ALGO_WP_256] = "wp256",
+ [HASH_ALGO_WP_384] = "wp384",
+ [HASH_ALGO_WP_512] = "wp512",
+ [HASH_ALGO_TGR_128] = "tgr128",
+ [HASH_ALGO_TGR_160] = "tgr160",
+ [HASH_ALGO_TGR_192] = "tgr192",
+ [HASH_ALGO_SM3_256] = "sm3",
+ [HASH_ALGO_STREEBOG_256] = "streebog256",
+ [HASH_ALGO_STREEBOG_512] = "streebog512",
+};
+
+const int hash_digest_size[HASH_ALGO__LAST] = {
+ [HASH_ALGO_MD4] = MD5_DIGEST_SIZE,
+ [HASH_ALGO_MD5] = MD5_DIGEST_SIZE,
+ [HASH_ALGO_SHA1] = SHA1_DIGEST_SIZE,
+ [HASH_ALGO_RIPE_MD_160] = RMD160_DIGEST_SIZE,
+ [HASH_ALGO_SHA256] = SHA256_DIGEST_SIZE,
+ [HASH_ALGO_SHA384] = SHA384_DIGEST_SIZE,
+ [HASH_ALGO_SHA512] = SHA512_DIGEST_SIZE,
+ [HASH_ALGO_SHA224] = SHA224_DIGEST_SIZE,
+ [HASH_ALGO_RIPE_MD_128] = RMD128_DIGEST_SIZE,
+ [HASH_ALGO_RIPE_MD_256] = RMD256_DIGEST_SIZE,
+ [HASH_ALGO_RIPE_MD_320] = RMD320_DIGEST_SIZE,
+ [HASH_ALGO_WP_256] = WP256_DIGEST_SIZE,
+ [HASH_ALGO_WP_384] = WP384_DIGEST_SIZE,
+ [HASH_ALGO_WP_512] = WP512_DIGEST_SIZE,
+ [HASH_ALGO_TGR_128] = TGR128_DIGEST_SIZE,
+ [HASH_ALGO_TGR_160] = TGR160_DIGEST_SIZE,
+ [HASH_ALGO_TGR_192] = TGR192_DIGEST_SIZE,
+ [HASH_ALGO_SM3_256] = SM3256_DIGEST_SIZE,
+ [HASH_ALGO_STREEBOG_256] = STREEBOG256_DIGEST_SIZE,
+ [HASH_ALGO_STREEBOG_512] = STREEBOG512_DIGEST_SIZE,
+};
+
+struct digest_list_item {
+ unsigned long long size;
+ u8 *buf;
+ u8 actions;
+ char digest_str[64 * 2 + 1];
+ enum hash_algo algo;
+ char filename_suffix[6 + 1];
+};
+
+static const char hex_asc[] = "0123456789abcdef";
+
+#define hex_asc_lo(x) hex_asc[((x) & 0x0f)]
+#define hex_asc_hi(x) hex_asc[((x) & 0xf0) >> 4]
+
+static inline char *hex_byte_pack(char *buf, unsigned char byte)
+{
+ *buf++ = hex_asc_hi(byte);
+ *buf++ = hex_asc_lo(byte);
+ return buf;
+}
+
+/* from lib/hexdump.c (Linux kernel) */
+static int hex_to_bin(char ch)
+{
+ if ((ch >= '0') && (ch <= '9'))
+ return ch - '0';
+ ch = tolower(ch);
+ if ((ch >= 'a') && (ch <= 'f'))
+ return ch - 'a' + 10;
+ return -1;
+}
+
+int _hex2bin(unsigned char *dst, const char *src, size_t count)
+{
+ while (count--) {
+ int hi = hex_to_bin(*src++);
+ int lo = hex_to_bin(*src++);
+
+ if ((hi < 0) || (lo < 0))
+ return -1;
+
+ *dst++ = (hi << 4) | lo;
+ }
+ return 0;
+}
+
+char *_bin2hex(char *dst, const void *src, size_t count)
+{
+ const unsigned char *_src = src;
+
+ while (count--)
+ dst = hex_byte_pack(dst, *_src++);
+ return dst;
+}
+
+static void set_hdr(u8 *buf, struct compact_list_hdr *hdr)
+{
+ memcpy(hdr, buf, sizeof(*hdr));
+ hdr->type = __le16_to_cpu(hdr->type);
+ hdr->modifiers = __le16_to_cpu(hdr->modifiers);
+ hdr->algo = __le16_to_cpu(hdr->algo);
+ hdr->count = __le32_to_cpu(hdr->count);
+ hdr->datalen = __le32_to_cpu(hdr->datalen);
+}
+
+u32 num_max_digest_lists = MAX_DIGEST_LISTS;
+u32 digest_lists_pos;
+struct digest_list_item *digest_lists[MAX_DIGEST_LISTS];
+
+enum hash_algo ima_hash_algo = HASH_ALGO__LAST;
+
+static enum hash_algo get_ima_hash_algo(void)
+{
+ char *measurement_list, *measurement_list_ptr;
+ size_t measurement_list_len;
+ int ret, i = 0;
+
+ if (ima_hash_algo != HASH_ALGO__LAST)
+ return ima_hash_algo;
+
+ ret = read_buffer(IMA_MEASUREMENTS_PATH, &measurement_list,
+ &measurement_list_len, true, true);
+ if (ret < 0)
+ return HASH_ALGO_SHA256;
+
+ measurement_list_ptr = measurement_list;
+ while ((strsep(&measurement_list_ptr, " ")) && i++ < 2)
+ ;
+
+ for (i = 0; measurement_list_ptr && i < HASH_ALGO__LAST; i++) {
+ if (!strncmp(hash_algo_name[i], measurement_list_ptr,
+ strlen(hash_algo_name[i]))) {
+ ima_hash_algo = i;
+ break;
+ }
+ }
+
+ free(measurement_list);
+ return ima_hash_algo;
+}
+
+int calc_digest(u8 *digest, void *data, u64 len, enum hash_algo algo)
+{
+ EVP_MD_CTX *mdctx;
+ const EVP_MD *md;
+ int ret = -EINVAL;
+
+ OpenSSL_add_all_algorithms();
+
+ md = EVP_get_digestbyname(hash_algo_name[algo]);
+ if (!md)
+ goto out;
+
+ mdctx = EVP_MD_CTX_create();
+ if (!mdctx)
+ goto out;
+
+ if (EVP_DigestInit_ex(mdctx, md, NULL) != 1)
+ goto out_mdctx;
+
+ if (EVP_DigestUpdate(mdctx, data, len) != 1)
+ goto out_mdctx;
+
+ if (EVP_DigestFinal_ex(mdctx, digest, NULL) != 1)
+ goto out_mdctx;
+
+ ret = 0;
+out_mdctx:
+ EVP_MD_CTX_destroy(mdctx);
+out:
+ EVP_cleanup();
+ return ret;
+}
+
+int calc_file_digest(u8 *digest, char *path, enum hash_algo algo)
+{
+ void *data = MAP_FAILED;
+ struct stat st;
+ int fd, ret = 0;
+
+ if (stat(path, &st) == -1)
+ return -EACCES;
+
+ fd = open(path, O_RDONLY);
+ if (fd < 0)
+ return -errno;
+
+ if (st.st_size) {
+ data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
+ if (data == MAP_FAILED) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ }
+
+ ret = calc_digest(digest, data, st.st_size, algo);
+out:
+ if (data != MAP_FAILED)
+ munmap(data, st.st_size);
+
+ close(fd);
+ return ret;
+}
+
+static struct digest_list_item *digest_list_generate(void)
+{
+ struct digest_list_item *digest_list;
+ struct compact_list_hdr *hdr_array = NULL, *hdr;
+ u8 *buf_ptr;
+ u32 num_digest_blocks = 0;
+ u8 digest[64];
+ int ret, i, j;
+
+ digest_list = calloc(1, sizeof(*digest_list));
+ if (!digest_list)
+ return NULL;
+
+ digest_list->buf = NULL;
+
+ while (!num_digest_blocks) {
+ ret = getrandom(&num_digest_blocks,
+ sizeof(num_digest_blocks), 0);
+ if (ret < 0)
+ goto out;
+
+ num_digest_blocks = num_digest_blocks % MAX_DIGEST_BLOCKS;
+ }
+
+ hdr_array = calloc(num_digest_blocks, sizeof(*hdr_array));
+ if (!hdr_array)
+ goto out;
+
+ for (i = 0; i < num_digest_blocks; i++) {
+ ret = getrandom(&hdr_array[i], sizeof(hdr_array[i]), 0);
+ if (ret < 0)
+ goto out;
+
+ hdr_array[i].version = 1;
+ hdr_array[i]._reserved = 0;
+ /* COMPACT_DIGEST_LIST type is not allowed. */
+ hdr_array[i].type = hdr_array[i].type % (COMPACT__LAST - 1);
+ hdr_array[i].modifiers =
+ hdr_array[i].modifiers % (1 << COMPACT_MOD_IMMUTABLE) + 1;
+ hdr_array[i].algo = hdr_array[i].algo % HASH_ALGO_RIPE_MD_128;
+ hdr_array[i].count = hdr_array[i].count % MAX_DIGEST_COUNT;
+
+ while (!hdr_array[i].count) {
+ ret = getrandom(&hdr_array[i].count,
+ sizeof(hdr_array[i].count), 0);
+ if (ret < 0)
+ goto out;
+
+ hdr_array[i].count =
+ hdr_array[i].count % MAX_DIGEST_COUNT;
+ }
+
+ hdr_array[i].datalen =
+ hdr_array[i].count * hash_digest_size[hdr_array[i].algo];
+
+ digest_list->size += sizeof(*hdr_array) + hdr_array[i].datalen;
+ }
+
+ digest_list->buf = calloc(digest_list->size, sizeof(unsigned char));
+ if (!digest_list->buf) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ buf_ptr = digest_list->buf;
+
+ for (i = 0; i < num_digest_blocks; i++) {
+ memcpy(buf_ptr, &hdr_array[i], sizeof(*hdr_array));
+ hdr = (struct compact_list_hdr *)buf_ptr;
+ hdr->type = __cpu_to_le16(hdr->type);
+ hdr->modifiers = __cpu_to_le16(hdr->modifiers);
+ hdr->algo = __cpu_to_le16(hdr->algo);
+ hdr->count = __cpu_to_le32(hdr->count);
+ hdr->datalen = __cpu_to_le32(hdr->datalen);
+
+ buf_ptr += sizeof(*hdr_array);
+
+ for (j = 0; j < hdr_array[i].count; j++) {
+ ret = getrandom(buf_ptr, sizeof(u32), 0);
+ if (ret < 0)
+ goto out;
+
+ *(u32 *)buf_ptr = *(u32 *)buf_ptr % MAX_DIGEST_VALUE;
+ buf_ptr += hash_digest_size[hdr_array[i].algo];
+ }
+ }
+
+ digest_list->algo = get_ima_hash_algo();
+ if (digest_list->algo == HASH_ALGO__LAST) {
+ ret = -ENOENT;
+ goto out;
+ }
+
+ ret = calc_digest(digest, digest_list->buf, digest_list->size,
+ digest_list->algo);
+ if (ret < 0)
+ goto out;
+
+ _bin2hex(digest_list->digest_str, digest,
+ hash_digest_size[digest_list->algo]);
+
+ ret = 0;
+out:
+ if (ret < 0) {
+ free(digest_list->buf);
+ free(digest_list);
+ }
+
+ free(hdr_array);
+ return !ret ? digest_list : NULL;
+}
+
+static struct digest_list_item *digest_list_generate_random(void)
+{
+ struct digest_list_item *digest_list;
+ struct compact_list_hdr *hdr;
+ u32 size = 0;
+ u8 digest[64];
+ int ret;
+
+ digest_list = calloc(1, sizeof(*digest_list));
+ if (!digest_list)
+ return NULL;
+
+ while (!size) {
+ ret = getrandom(&size, sizeof(size), 0);
+ if (ret < 0)
+ goto out;
+
+ size = size % MAX_DIGEST_LIST_SIZE;
+ }
+
+ digest_list->size = size;
+ digest_list->buf = calloc(digest_list->size, sizeof(unsigned char));
+ if (!digest_list->buf) {
+ free(digest_list);
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ ret = getrandom(digest_list->buf, digest_list->size, 0);
+ if (ret < 0)
+ goto out;
+
+ hdr = (struct compact_list_hdr *)digest_list->buf;
+ hdr->version = 1;
+ hdr->_reserved = 0;
+ hdr->type = hdr->type % (COMPACT__LAST - 1);
+ hdr->algo = hdr->algo % HASH_ALGO__LAST;
+
+ hdr->type = __cpu_to_le16(hdr->type);
+ hdr->modifiers = __cpu_to_le16(hdr->modifiers);
+ hdr->algo = __cpu_to_le16(hdr->algo);
+ hdr->count = __cpu_to_le32(hdr->count);
+ hdr->datalen = __cpu_to_le32(hdr->datalen);
+
+ digest_list->algo = get_ima_hash_algo();
+ if (digest_list->algo == HASH_ALGO__LAST) {
+ ret = -ENOENT;
+ goto out;
+ }
+
+ ret = calc_digest(digest, digest_list->buf, digest_list->size,
+ digest_list->algo);
+ if (ret < 0)
+ goto out;
+
+ _bin2hex(digest_list->digest_str, digest,
+ hash_digest_size[digest_list->algo]);
+
+ ret = 0;
+out:
+ if (ret < 0) {
+ free(digest_list->buf);
+ free(digest_list);
+ }
+
+ return !ret ? digest_list : NULL;
+}
+
+static int digest_list_upload(struct digest_list_item *digest_list, enum ops op,
+ enum upload_types upload_type, int uid)
+{
+ char path_template[] = DIGEST_LIST_PATH_TEMPLATE;
+ char *path_upload = DIGEST_LIST_ADD_PATH, *basename;
+ unsigned char *buffer = digest_list->buf;
+ size_t buffer_len = digest_list->size;
+ unsigned char rnd[3];
+ int ret = 0, fd;
+
+ if (op == DIGEST_LIST_ADD) {
+ if (upload_type == UPLOAD_FILE ||
+ upload_type == UPLOAD_FILE_CHOWN) {
+ fd = mkstemp(path_template);
+ if (fd < 0)
+ return -EPERM;
+
+ if (upload_type == UPLOAD_FILE_CHOWN)
+ ret = fchown(fd, 3000, -1);
+
+ fchmod(fd, 0644);
+ close(fd);
+
+ if (ret < 0)
+ goto out;
+
+ ret = write_buffer(path_template,
+ (char *)digest_list->buf,
+ digest_list->size, -1);
+ if (ret < 0)
+ goto out;
+
+ buffer = (unsigned char *)path_template;
+ buffer_len = strlen(path_template);
+ } else {
+ ret = getrandom(rnd, sizeof(rnd), 0);
+ if (ret < 0)
+ goto out;
+
+ _bin2hex(path_template +
+ sizeof(DIGEST_LIST_PATH_TEMPLATE) - 7, rnd,
+ sizeof(rnd));
+ }
+
+ memcpy(digest_list->filename_suffix,
+ path_template + sizeof(DIGEST_LIST_PATH_TEMPLATE) - 7,
+ 6);
+ } else {
+ memcpy(path_template + sizeof(DIGEST_LIST_PATH_TEMPLATE) - 7,
+ digest_list->filename_suffix, 6);
+ path_upload = DIGEST_LIST_DEL_PATH;
+ if (upload_type == UPLOAD_FILE ||
+ upload_type == UPLOAD_FILE_CHOWN) {
+ buffer = (unsigned char *)path_template;
+ buffer_len = strlen(path_template);
+ }
+ }
+
+ if (upload_type == UPLOAD_BUFFER) {
+ basename = strrchr(path_template, '/') + 1;
+ ret = write_buffer(DIGEST_LABEL_PATH, basename,
+ strlen(basename), -1);
+ if (ret < 0)
+ goto out;
+ }
+
+ ret = write_buffer(path_upload, (char *)buffer, buffer_len, uid);
+out:
+ if ((op == DIGEST_LIST_ADD && ret < 0) ||
+ (op == DIGEST_LIST_DEL && !ret))
+ unlink(path_template);
+
+ return ret;
+}
+
+static int digest_list_check(struct digest_list_item *digest_list, enum ops op)
+{
+ char path[PATH_MAX];
+ u8 digest_list_buf[MAX_LINE_LENGTH];
+ char digest_list_info[MAX_LINE_LENGTH];
+ ssize_t size = digest_list->size;
+ struct compact_list_hdr hdr;
+ struct stat st;
+ int ret = 0, i, fd, path_len, len, read_len;
+
+ path_len = snprintf(path, sizeof(path), "%s/%s-%s-digest_list.%s.ascii",
+ DIGEST_LISTS_LOADED_PATH,
+ hash_algo_name[digest_list->algo],
+ digest_list->digest_str,
+ digest_list->filename_suffix);
+
+ path[path_len - 6] = '\0';
+
+ if (op == DIGEST_LIST_DEL) {
+ if (stat(path, &st) != -1)
+ return -EEXIST;
+
+ path[path_len - 6] = '.';
+
+ if (stat(path, &st) != -1)
+ return -EEXIST;
+
+ return 0;
+ }
+
+ fd = open(path, O_RDONLY);
+ if (fd < 0)
+ return -errno;
+
+ while (size) {
+ len = read(fd, digest_list_buf, sizeof(digest_list_buf));
+ if (len <= 0) {
+ ret = (len < 0) ? -errno : -EIO;
+ goto out;
+ }
+
+ if (memcmp(digest_list_buf,
+ digest_list->buf + digest_list->size - size, len)) {
+ ret = -EIO;
+ goto out;
+ }
+
+ size -= len;
+ }
+
+ close(fd);
+
+ path[path_len - 6] = '.';
+
+ fd = open(path, O_RDONLY);
+ if (fd < 0)
+ return -errno;
+
+ size = digest_list->size;
+ while (size) {
+ set_hdr(digest_list->buf + digest_list->size - size, &hdr);
+
+ /* From digest_list_show_common(). */
+ len = snprintf(digest_list_info, sizeof(digest_list_info),
+ "actions: %d, version: %d, algo: %s, type: %d, modifiers: %d, count: %d, datalen: %d\n",
+ digest_list->actions, hdr.version,
+ hash_algo_name[hdr.algo], hdr.type, hdr.modifiers,
+ hdr.count, hdr.datalen);
+
+ read_len = read(fd, digest_list_buf, len);
+
+ if (read_len != len ||
+ memcmp(digest_list_info, digest_list_buf, len)) {
+ ret = -EIO;
+ goto out;
+ }
+
+ size -= sizeof(hdr);
+
+ for (i = 0; i < hdr.count; i++) {
+ _bin2hex(digest_list_info,
+ digest_list->buf + digest_list->size - size,
+ hash_digest_size[hdr.algo]);
+
+ read_len = read(fd, digest_list_buf,
+ hash_digest_size[hdr.algo] * 2 + 1);
+
+ if (read_len != hash_digest_size[hdr.algo] * 2 + 1 ||
+ memcmp(digest_list_info, digest_list_buf,
+ read_len - 1) ||
+ digest_list_buf[read_len - 1] != '\n') {
+ ret = -EIO;
+ goto out;
+ }
+
+ size -= hash_digest_size[hdr.algo];
+ }
+ }
+out:
+ close(fd);
+ return ret;
+}
+
+static int digest_list_query(u8 *digest, enum hash_algo algo,
+ char **query_result)
+{
+ ssize_t len, to_write, written;
+ char query[256] = { 0 };
+ size_t query_result_len;
+ int ret = 0, fd;
+
+ len = snprintf(query, sizeof(query), "%s-", hash_algo_name[algo]);
+
+ _bin2hex(query + len, digest, hash_digest_size[algo]);
+ len += hash_digest_size[algo] * 2 + 1;
+
+ fd = open(DIGEST_QUERY_PATH, O_WRONLY);
+ if (fd < 0)
+ return -errno;
+
+ to_write = len;
+
+ while (to_write) {
+ written = write(fd, query + len - to_write, to_write);
+ if (written <= 0) {
+ ret = -errno;
+ break;
+ }
+
+ to_write -= written;
+ }
+
+ close(fd);
+ if (ret < 0)
+ return ret;
+
+ return read_buffer(DIGEST_QUERY_PATH, query_result, &query_result_len,
+ true, true);
+}
+
+static int *get_count_gen_lists(u8 *digest, enum hash_algo algo,
+ bool is_digest_list)
+{
+ struct compact_list_hdr hdr;
+ u8 *buf_ptr;
+ unsigned long long size;
+ struct digest_list_item *digest_list;
+ u8 digest_list_digest[64];
+ int i, j, *count;
+
+ count = calloc(num_max_digest_lists, sizeof(*count));
+ if (!count)
+ return count;
+
+ for (i = 0; i < num_max_digest_lists; i++) {
+ if (!digest_lists[i])
+ continue;
+
+ digest_list = digest_lists[i];
+ size = digest_lists[i]->size;
+ buf_ptr = digest_lists[i]->buf;
+
+ if (is_digest_list) {
+ _hex2bin(digest_list_digest, digest_list->digest_str,
+ hash_digest_size[digest_list->algo]);
+ if (!memcmp(digest_list_digest, digest,
+ hash_digest_size[digest_list->algo]))
+ count[i]++;
+
+ continue;
+ }
+
+ while (size) {
+ set_hdr(buf_ptr, &hdr);
+
+ if (hdr.algo != algo) {
+ buf_ptr += sizeof(hdr) + hdr.datalen;
+ size -= sizeof(hdr) + hdr.datalen;
+ continue;
+ }
+
+ buf_ptr += sizeof(hdr);
+ size -= sizeof(hdr);
+
+ for (j = 0; j < hdr.count; j++) {
+ if (!memcmp(digest, buf_ptr,
+ hash_digest_size[algo]))
+ count[i]++;
+ buf_ptr += hash_digest_size[algo];
+ size -= hash_digest_size[algo];
+ }
+ }
+ }
+
+ return count;
+}
+
+static int *get_count_kernel_query(u8 *digest, enum hash_algo algo,
+ bool is_digest_list)
+{
+ char *query_result = NULL, *query_result_ptr, *line;
+ char digest_list_info[MAX_LINE_LENGTH];
+ char label[256];
+ struct compact_list_hdr hdr;
+ struct digest_list_item *digest_list;
+ unsigned long long size, size_info;
+ int ret, i, *count = NULL;
+
+ count = calloc(num_max_digest_lists, sizeof(*count));
+ if (!count)
+ return count;
+
+ ret = digest_list_query(digest, algo, &query_result);
+ if (ret < 0)
+ goto out;
+
+ query_result_ptr = query_result;
+
+ while ((line = strsep(&query_result_ptr, "\n"))) {
+ if (!strlen(line))
+ continue;
+
+ for (i = 0; i < num_max_digest_lists; i++) {
+ if (!digest_lists[i])
+ continue;
+
+ digest_list = digest_lists[i];
+ size = digest_list->size;
+
+ if (is_digest_list) {
+ snprintf(label, sizeof(label),
+ "%s-%s-digest_list.%s",
+ hash_algo_name[digest_list->algo],
+ digest_list->digest_str,
+ digest_list->filename_suffix);
+
+ /* From digest_query_show(). */
+ size_info = snprintf(digest_list_info,
+ sizeof(digest_list_info),
+ "%s (actions: %d): type: %d, size: %lld\n",
+ label, digest_list->actions,
+ COMPACT_DIGEST_LIST, size);
+
+ /* strsep() replaced '\n' with '\0' in line. */
+ digest_list_info[size_info - 1] = '\0';
+
+ if (!strcmp(digest_list_info, line))
+ count[i]++;
+
+ continue;
+ }
+
+ while (size) {
+ set_hdr(digest_list->buf + digest_list->size -
+ size, &hdr);
+ size -= sizeof(hdr) + hdr.datalen;
+
+ snprintf(label, sizeof(label),
+ "%s-%s-digest_list.%s",
+ hash_algo_name[digest_list->algo],
+ digest_list->digest_str,
+ digest_list->filename_suffix);
+
+ /* From digest_query_show(). */
+ size_info = snprintf(digest_list_info,
+ sizeof(digest_list_info),
+ "%s (actions: %d): version: %d, algo: %s, type: %d, modifiers: %d, count: %d, datalen: %d\n",
+ label, digest_list->actions,
+ hdr.version,
+ hash_algo_name[hdr.algo], hdr.type,
+ hdr.modifiers, hdr.count,
+ hdr.datalen);
+
+ /* strsep() replaced '\n' with '\0' in line. */
+ digest_list_info[size_info - 1] = '\0';
+
+ if (!strcmp(digest_list_info, line)) {
+ count[i]++;
+ break;
+ }
+ }
+ }
+ }
+out:
+ free(query_result);
+ if (ret < 0)
+ free(count);
+
+ return (!ret) ? count : NULL;
+}
+
+static int compare_count(u8 *digest, enum hash_algo algo,
+ bool is_digest_list, struct __test_metadata *_metadata)
+{
+ int *count_gen_list_array, *count_kernel_query_array;
+ int count_gen_list = 0, count_kernel_query = 0;
+ char digest_str[64 * 2 + 1] = { 0 };
+ int i;
+
+ count_gen_list_array = get_count_gen_lists(digest, algo,
+ is_digest_list);
+ if (!count_gen_list_array)
+ return -EINVAL;
+
+ count_kernel_query_array = get_count_kernel_query(digest, algo,
+ is_digest_list);
+ if (!count_kernel_query_array) {
+ free(count_gen_list_array);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < num_max_digest_lists; i++) {
+ count_gen_list += count_gen_list_array[i];
+ count_kernel_query += count_kernel_query_array[i];
+ }
+
+ _bin2hex(digest_str, digest, hash_digest_size[algo]);
+
+ TH_LOG("digest: %s, algo: %s, gen list digests: %d, kernel digests: %d",
+ digest_str, hash_algo_name[algo], count_gen_list,
+ count_kernel_query);
+ free(count_gen_list_array);
+ free(count_kernel_query_array);
+ return (count_gen_list == count_kernel_query) ? 0 : -EINVAL;
+}
+
+static void digest_list_delete_all(struct __test_metadata *_metadata,
+ enum upload_types upload_type)
+{
+ int ret, i;
+
+ for (i = 0; i < MAX_DIGEST_LISTS; i++) {
+ if (!digest_lists[i])
+ continue;
+
+ ret = digest_list_upload(digest_lists[i], DIGEST_LIST_DEL,
+ upload_type, -1);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_upload() failed\n");
+ }
+
+ free(digest_lists[i]->buf);
+ free(digest_lists[i]);
+ digest_lists[i] = NULL;
+ }
+}
+
+FIXTURE(test)
+{
+ enum upload_types upload_type;
+};
+
+FIXTURE_SETUP(test)
+{
+}
+
+FIXTURE_TEARDOWN(test)
+{
+ digest_list_delete_all(_metadata, self->upload_type);
+}
+
+static int enable_fault_injection(void)
+{
+ int ret;
+
+ ret = write_buffer(DIGEST_LIST_DEBUGFS_TASK_FILTER, "Y", 1, -1);
+ if (ret < 0)
+ return ret;
+
+ ret = write_buffer(DIGEST_LIST_DEBUGFS_PROBABILITY, "1", 1, -1);
+ if (ret < 0)
+ return ret;
+
+ ret = write_buffer(DIGEST_LIST_DEBUGFS_TIMES, "10000", 5, -1);
+ if (ret < 0)
+ return ret;
+
+ ret = write_buffer(DIGEST_LIST_DEBUGFS_VERBOSE, "1", 1, -1);
+ if (ret < 0)
+ return ret;
+
+ ret = write_buffer(PROCFS_SELF_FAULT, "1", 1, -1);
+ if (ret < 0)
+ return ret;
+
+ return 0;
+}
+
+static void digest_list_add_del_test(struct __test_metadata *_metadata,
+ int fault_injection,
+ enum upload_types upload_type)
+{
+ u32 value;
+ enum ops op;
+ enum hash_algo algo;
+ u8 digest[64];
+ int ret, i, cur_queries = 1;
+
+ while (cur_queries <= NUM_QUERIES) {
+ ret = getrandom(&op, sizeof(op), 0);
+ ASSERT_EQ(sizeof(op), ret) {
+ TH_LOG("getrandom() failed\n");
+ }
+
+ /* Cast to u32: a negative random value would make op % 2 negative. */
+ op = (u32)op % 2;
+
+ switch (op) {
+ case DIGEST_LIST_ADD:
+ TH_LOG("add digest list...");
+ for (digest_lists_pos = 0;
+ digest_lists_pos < num_max_digest_lists;
+ digest_lists_pos++)
+ if (!digest_lists[digest_lists_pos])
+ break;
+
+ if (digest_lists_pos == num_max_digest_lists)
+ continue;
+
+ digest_lists[digest_lists_pos] = digest_list_generate();
+ ASSERT_NE(NULL, digest_lists[digest_lists_pos]) {
+ TH_LOG("digest_list_generate() failed");
+ }
+
+ ret = digest_list_upload(digest_lists[digest_lists_pos],
+ op, upload_type, -1);
+ /* Handle failures from fault injection. */
+ if (fault_injection && ret < 0) {
+ TH_LOG("handle failure...");
+ ret = digest_list_check(
+ digest_lists[digest_lists_pos],
+ DIGEST_LIST_DEL);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_check() failed");
+ }
+
+ free(digest_lists[digest_lists_pos]->buf);
+ free(digest_lists[digest_lists_pos]);
+ digest_lists[digest_lists_pos] = NULL;
+ break;
+ }
+
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_upload() failed");
+ }
+
+ ret = digest_list_check(digest_lists[digest_lists_pos],
+ op);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_check() failed");
+ }
+
+ break;
+ case DIGEST_LIST_DEL:
+ TH_LOG("delete digest list...");
+ for (digest_lists_pos = 0;
+ digest_lists_pos < num_max_digest_lists;
+ digest_lists_pos++)
+ if (digest_lists[digest_lists_pos])
+ break;
+
+ if (digest_lists_pos == num_max_digest_lists)
+ continue;
+
+ for (i = 0; i < MAX_SEARCH_ATTEMPTS; i++) {
+ ret = getrandom(&digest_lists_pos,
+ sizeof(digest_lists_pos), 0);
+ ASSERT_EQ(sizeof(digest_lists_pos), ret) {
+ TH_LOG("getrandom() failed");
+ }
+
+ digest_lists_pos =
+ digest_lists_pos % num_max_digest_lists;
+
+ if (digest_lists[digest_lists_pos])
+ break;
+ }
+
+ if (i == MAX_SEARCH_ATTEMPTS) {
+ for (digest_lists_pos = 0;
+ digest_lists_pos < num_max_digest_lists;
+ digest_lists_pos++)
+ if (digest_lists[digest_lists_pos])
+ break;
+
+ if (digest_lists_pos == num_max_digest_lists)
+ continue;
+ }
+
+ ret = digest_list_upload(digest_lists[digest_lists_pos],
+ op, upload_type, -1);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_upload() failed");
+ }
+
+ ret = digest_list_check(digest_lists[digest_lists_pos],
+ op);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_check() failed");
+ }
+
+ free(digest_lists[digest_lists_pos]->buf);
+ free(digest_lists[digest_lists_pos]);
+ digest_lists[digest_lists_pos] = NULL;
+ break;
+ default:
+ break;
+ }
+
+ ret = getrandom(&value, sizeof(value), 0);
+ ASSERT_EQ(sizeof(value), ret) {
+ TH_LOG("getrandom() failed");
+ }
+
+ value = value % 10;
+
+ if (value != 1)
+ continue;
+
+ ret = getrandom(&value, sizeof(value), 0);
+ ASSERT_EQ(sizeof(value), ret) {
+ TH_LOG("getrandom() failed");
+ }
+
+ value = value % MAX_DIGEST_VALUE;
+
+ ret = getrandom(&algo, sizeof(algo), 0);
+ ASSERT_EQ(sizeof(algo), ret) {
+ TH_LOG("getrandom() failed");
+ }
+
+ /* Cast to u32: a negative random algo would index arrays out of bounds. */
+ algo = (u32)algo % HASH_ALGO_RIPE_MD_128;
+
+ memset(digest, 0, sizeof(digest));
+ *(u32 *)digest = value;
+
+ ret = compare_count(digest, algo, false, _metadata);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("count mismatch");
+ }
+
+ ret = getrandom(&value, sizeof(value), 0);
+ ASSERT_EQ(sizeof(value), ret) {
+ TH_LOG("getrandom() failed");
+ }
+
+ value = value % MAX_DIGEST_LISTS;
+
+ if (digest_lists[value] != NULL) {
+ _hex2bin(digest, digest_lists[value]->digest_str,
+ hash_digest_size[digest_lists[value]->algo]);
+
+ ret = compare_count(digest, digest_lists[value]->algo,
+ true, _metadata);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("count mismatch");
+ }
+ }
+
+ TH_LOG("query digest lists (%d/%d)...", cur_queries,
+ NUM_QUERIES);
+
+ cur_queries++;
+ }
+}
+
+TEST_F_TIMEOUT(test, digest_list_add_del_test_file_upload, UINT_MAX)
+{
+ self->upload_type = UPLOAD_FILE;
+ digest_list_add_del_test(_metadata, 0, self->upload_type);
+}
+
+TEST_F_TIMEOUT(test, digest_list_add_del_test_file_upload_fault, UINT_MAX)
+{
+ int ret;
+
+ self->upload_type = UPLOAD_FILE;
+
+ ret = enable_fault_injection();
+ ASSERT_EQ(0, ret) {
+ TH_LOG("enable_fault_injection() failed");
+ }
+
+ digest_list_add_del_test(_metadata, 1, self->upload_type);
+}
+
+TEST_F_TIMEOUT(test, digest_list_add_del_test_buffer_upload, UINT_MAX)
+{
+ self->upload_type = UPLOAD_BUFFER;
+ digest_list_add_del_test(_metadata, 0, self->upload_type);
+}
+
+TEST_F_TIMEOUT(test, digest_list_add_del_test_buffer_upload_fault, UINT_MAX)
+{
+ int ret;
+
+ self->upload_type = UPLOAD_BUFFER;
+
+ ret = enable_fault_injection();
+ ASSERT_EQ(0, ret) {
+ TH_LOG("enable_fault_injection() failed");
+ }
+
+ digest_list_add_del_test(_metadata, 1, self->upload_type);
+}
+
+FIXTURE(test_fuzzing)
+{
+};
+
+FIXTURE_SETUP(test_fuzzing)
+{
+}
+
+FIXTURE_TEARDOWN(test_fuzzing)
+{
+}
+
+TEST_F_TIMEOUT(test_fuzzing, digest_list_fuzzing_test, UINT_MAX)
+{
+ char digests_count_before[256] = { 0 };
+ char *digests_count_before_ptr = digests_count_before;
+ char digests_count_after[256] = { 0 };
+ char *digests_count_after_ptr = digests_count_after;
+ size_t len = sizeof(digests_count_before) - 1;
+ struct digest_list_item *digest_list;
+ int ret, i;
+
+ ret = read_buffer(DIGESTS_COUNT, &digests_count_before_ptr, &len,
+ false, true);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("read_buffer() failed");
+ }
+
+ for (i = 1; i <= NUM_ITERATIONS; i++) {
+ TH_LOG("add digest list (%d/%d)...", i, NUM_ITERATIONS);
+
+ digest_list = digest_list_generate_random();
+ ASSERT_NE(NULL, digest_list) {
+ TH_LOG("digest_list_generate_random() failed");
+ }
+
+ ret = digest_list_upload(digest_list, DIGEST_LIST_ADD,
+ UPLOAD_FILE, -1);
+ if (!ret) {
+ ret = digest_list_check(digest_list, DIGEST_LIST_ADD);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_check() failed");
+ }
+
+ ret = digest_list_upload(digest_list,
+ DIGEST_LIST_DEL, UPLOAD_FILE,
+ -1);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_upload() failed");
+ }
+
+ ret = digest_list_check(digest_list, DIGEST_LIST_DEL);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_check() failed");
+ }
+ }
+
+ free(digest_list->buf);
+ free(digest_list);
+ }
+
+ ret = read_buffer(DIGESTS_COUNT, &digests_count_after_ptr, &len, false,
+ true);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("read_buffer() failed");
+ }
+
+ ASSERT_STREQ(digests_count_before, digests_count_after);
+}
+
+#define IMA_MEASURE_RULES "measure func=CRITICAL_DATA label=diglim euid=1000 \nmeasure func=FILE_CHECK fowner=3000 \n"
+
+static int load_ima_policy(char *policy)
+{
+ char *cur_ima_policy = NULL;
+ size_t cur_ima_policy_len = 0;
+ bool rule_found = false;
+ int ret;
+
+ ret = read_buffer(IMA_POLICY_PATH, &cur_ima_policy, &cur_ima_policy_len,
+ true, true);
+ if (ret < 0)
+ return ret;
+
+ rule_found = (strstr(cur_ima_policy, policy) != NULL);
+ free(cur_ima_policy);
+
+ if (!rule_found) {
+ ret = write_buffer(IMA_POLICY_PATH, policy, strlen(policy), -1);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+FIXTURE(test_measure)
+{
+};
+
+FIXTURE_SETUP(test_measure)
+{
+ int ret;
+
+ ret = load_ima_policy(IMA_MEASURE_RULES);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("load_ima_policy() failed");
+ }
+}
+
+FIXTURE_TEARDOWN(test_measure)
+{
+}
+
+static void digest_list_add_del_test_file_upload_measured_common(
+ struct __test_metadata *_metadata,
+ enum upload_types upload_type, uid_t uid)
+{
+ struct digest_list_item *digest_list;
+ int ret;
+
+ digest_list = digest_list_generate();
+ ASSERT_NE(NULL, digest_list) {
+ TH_LOG("digest_list_generate() failed");
+ }
+
+ digest_list->actions |= (1 << COMPACT_ACTION_IMA_MEASURED);
+
+ ret = digest_list_upload(digest_list, DIGEST_LIST_ADD, upload_type,
+ uid);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_upload() failed");
+ }
+
+ ret = digest_list_check(digest_list, DIGEST_LIST_ADD);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_check() failed");
+ }
+
+ ret = digest_list_upload(digest_list, DIGEST_LIST_DEL,
+ upload_type, uid);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_upload() failed");
+ }
+
+ ret = digest_list_check(digest_list, DIGEST_LIST_DEL);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_check() failed");
+ }
+
+ free(digest_list->buf);
+ free(digest_list);
+}
+
+TEST_F_TIMEOUT(test_measure, digest_list_add_del_test_file_upload_measured,
+ UINT_MAX)
+{
+ digest_list_add_del_test_file_upload_measured_common(_metadata,
+ UPLOAD_FILE, 1000);
+}
+
+TEST_F_TIMEOUT(test_measure,
+ digest_list_add_del_test_file_upload_measured_chown, UINT_MAX)
+{
+ digest_list_add_del_test_file_upload_measured_common(_metadata,
+ UPLOAD_FILE_CHOWN,
+ -1);
+}
+
+void digest_list_check_measurement_list_test_common(
+ struct __test_metadata *_metadata,
+ enum upload_types upload_type)
+{
+ struct digest_list_item *digest_list;
+ char *measurement_list = NULL;
+ size_t measurement_list_len;
+ char event_digest_name[512];
+ bool entry_found;
+ int ret;
+
+ digest_list = digest_list_generate();
+ ASSERT_NE(NULL, digest_list) {
+ TH_LOG("digest_list_generate() failed");
+ }
+
+ digest_list->actions |= (1 << COMPACT_ACTION_IMA_MEASURED);
+
+ ret = digest_list_upload(digest_list, DIGEST_LIST_ADD, upload_type,
+ 1000);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_upload() failed");
+ }
+
+ ret = digest_list_check(digest_list, DIGEST_LIST_ADD);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_check() failed");
+ }
+
+ ret = read_buffer(IMA_MEASUREMENTS_PATH, &measurement_list,
+ &measurement_list_len, true, true);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("read_buffer() failed");
+ }
+
+ snprintf(event_digest_name, sizeof(event_digest_name),
+ "%s:%s add_%s_digest_list.%s",
+ hash_algo_name[digest_list->algo],
+ digest_list->digest_str,
+ upload_type == UPLOAD_FILE ? "file" : "buffer",
+ digest_list->filename_suffix);
+
+ entry_found = (strstr(measurement_list, event_digest_name) != NULL);
+ free(measurement_list);
+
+ ASSERT_EQ(true, entry_found) {
+ TH_LOG("digest list not found in measurement list");
+ }
+
+ ret = digest_list_upload(digest_list, DIGEST_LIST_DEL, upload_type, -1);
+ ASSERT_NE(0, ret) {
+ TH_LOG("digest_list_upload() success unexpected");
+ }
+
+ ret = digest_list_upload(digest_list, DIGEST_LIST_DEL, upload_type,
+ 1000);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_upload() failed");
+ }
+
+ ret = digest_list_check(digest_list, DIGEST_LIST_DEL);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("digest_list_check() failed");
+ }
+
+ measurement_list = NULL;
+
+ ret = read_buffer(IMA_MEASUREMENTS_PATH, &measurement_list,
+ &measurement_list_len, true, true);
+ ASSERT_EQ(0, ret) {
+ TH_LOG("read_buffer() failed");
+ }
+
+ snprintf(event_digest_name, sizeof(event_digest_name),
+ "%s:%s del_%s_digest_list.%s",
+ hash_algo_name[digest_list->algo],
+ digest_list->digest_str,
+ upload_type == UPLOAD_FILE ? "file" : "buffer",
+ digest_list->filename_suffix);
+
+ entry_found = (strstr(measurement_list, event_digest_name) != NULL);
+ free(measurement_list);
+
+ ASSERT_EQ(true, entry_found) {
+ TH_LOG("digest list not found in measurement list");
+ }
+
+ free(digest_list->buf);
+ free(digest_list);
+}
+
+TEST_F_TIMEOUT(test_measure,
+ digest_list_check_measurement_list_test_file_upload, UINT_MAX)
+{
+ digest_list_check_measurement_list_test_common(_metadata, UPLOAD_FILE);
+}
+
+TEST_F_TIMEOUT(test_measure,
+ digest_list_check_measurement_list_test_buffer_upload, UINT_MAX)
+{
+ digest_list_check_measurement_list_test_common(_metadata,
+ UPLOAD_BUFFER);
+}
+
+TEST_HARNESS_MAIN
--
2.25.1

2021-10-28 09:09:40

by Roberto Sassu

[permalink] [raw]
Subject: RE: [PATCH v3 00/13] integrity: Introduce DIGLIM

> From: Roberto Sassu
> Sent: Tuesday, September 14, 2021 6:34 PM
> Status
> ======
>
> This version of the patch set implements the suggestions received for
> version 2. Apart from one patch added for the IMA API and few fixes, there
> are no substantial changes. It has been tested on: x86_64, UML (x86_64),
> s390x (big endian).

Hi everyone

I didn't receive any comments on this version. I believe it is ready
to be accepted, as I addressed the comments on the previous versions
and the tests I wrote are sufficiently thorough to catch potential
problems.

Is there anything I could do to increase the chances of acceptance?
Would moving DIGLIM to security/ instead of security/integrity/
make it more suitable for inclusion?

Thanks

Roberto

HUAWEI TECHNOLOGIES Duesseldorf GmbH, HRB 56063
Managing Director: Li Peng, Zhong Ronghua

> The long term goal is to boot a system with appraisal enabled and with
> DIGLIM as repository for reference values, taken from the RPM database.
>
> Changes required:
> - new execution policies in IMA
> (https://lore.kernel.org/linux-integrity/20210409114313.4073-1-
> [email protected]/)
> - support for the euid policy keyword for critical data
> (https://lore.kernel.org/linux-integrity/20210705115650.3373599-1-
> [email protected]/)
> - basic DIGLIM
> (this patch set)
> - additional DIGLIM features (loader, LSM, user space utilities)
> - support for DIGLIM in IMA
> - support for PGP keys and signatures
> (from David Howells)
> - support for PGP appended signatures in IMA
>
>
> Introduction
> ============
>
> Digest Lists Integrity Module (DIGLIM) is a component of the integrity
> subsystem in the kernel, primarily aiming to aid Integrity Measurement
> Architecture (IMA) in the process of checking the integrity of file
> content and metadata. It accomplishes this task by storing reference
> values coming from software vendors and by reporting whether or not the
> digest of file content or metadata calculated by IMA (or EVM) is found
> among those values. In this way, IMA can decide, depending on the result
> of a query, if a measurement should be taken or access to the file
> should be granted. The Security Assumptions section explains more in
> detail why this component has been placed in the kernel.
>
> The main benefits of using IMA in conjunction with DIGLIM are the
> ability to implement advanced remote attestation schemes based on the
> usage of a TPM key for establishing a TLS secure channel[1][2], and to
> reduce the burden on Linux distribution vendors to extend secure boot at
> OS level to applications.
>
> DIGLIM does not have the complexity of feature-rich databases. In fact,
> its main functionality comes from the hash table primitives already in
> the kernel. It does not have an ad-hoc storage module; it just indexes
> data in a fixed format (digest lists, a set of concatenated digests
> preceded by a header), copied to kernel memory as they are. Lastly, it
> does not support database-oriented languages such as SQL, but only
> accepts a digest and its algorithm as a query.
>
> The only digest list format supported by DIGLIM is called compact.
> However, Linux distribution vendors don't have to generate new digest
> lists in this format for the packages they release, as already available
> information, such as RPM headers and DEB package metadata, can be used
> as a source for reference values (they include file digests), with a
> user space parser taking care of the conversion to the compact format.
>
> Although one might expect that storing file or metadata digests for a
> Linux distribution would significantly increase memory usage, this
> does not seem to be the case. As a preview of the evaluation
> in the Preliminary Performance Evaluation section, protecting binaries
> and shared libraries of a minimal Fedora 33 installation requires 208K
> of memory for the digest lists plus 556K for indexing.
>
> In exchange for a slightly increased memory usage, DIGLIM improves the
> performance of the integrity subsystem. In the considered scenario, IMA
> measurement and appraisal of 5896 files with digest lists require,
> respectively, less than a quarter and less than half of the time taken
> by the current solution.
>
> DIGLIM also keeps track of whether digest lists have been processed in
> some way (e.g. measured or appraised by IMA). This is important, for
> example, for remote attestation, so that remote verifiers understand what
> has been uploaded to the kernel.
>
> Operations in DIGLIM are atomic: if an error occurs during the addition
> of a digest list, DIGLIM rolls back the entire insert operation;
> deletions instead always succeed. This capability has been tested with
> an ad-hoc fault injection mechanism capable of simulating failures
> during the operations.
>
> Finally, DIGLIM exposes to user space, through securityfs, the digest
> lists currently loaded, the number of digests added, a query interface
> and an interface to set digest list labels.
>
>
> Binary Integrity
>
> Integrity is a fundamental security property in information systems.
> Integrity can be described as the state a generic component is in
> immediately after it has been released by the entity that created it.
>
> One way to check whether a component is in this condition (called binary
> integrity) is to calculate its digest and to compare it with a reference
> value (i.e. the digest calculated in controlled conditions, when the
> component is released).
>
> IMA, a software part of the integrity subsystem, can perform such
> evaluation and execute different actions:
>
> - store the digest in an integrity-protected measurement list, so that
> it can be sent to a remote verifier for analysis;
> - compare the calculated digest with a reference value (usually
> protected with a signature) and deny operations if the file is found
> corrupted;
> - store the digest in the system log.
>
>
> Benefits
>
> DIGLIM further enhances the capabilities offered by IMA-based solutions
> and, at the same time, makes them more practical to adopt by reusing
> existing sources as reference values for integrity decisions.
>
> Possible sources for digest lists are:
>
> - RPM headers;
> - Debian repository metadata.
>
> Benefits for IMA Measurement
>
> One of the issues that arises when files are measured by the OS is that,
> due to parallel execution, the order in which file accesses happen
> cannot be predicted. Since the TPM Platform Configuration Register (PCR)
> extend operation, executed after each file measurement,
> cryptographically binds the current measurement to the previous ones,
> the PCR value at the end of a workload cannot be predicted either.
>
> Thus, even if the usage of a TPM key, bound to a PCR value, should be
> allowed when only good files were accessed, the TPM could unexpectedly
> deny an operation on that key if file accesses did not happen as stated
> by the key policy (which allows only one of the possible sequences).
>
> DIGLIM solves this issue by making the PCR value stable over time and
> independent of file accesses. The following figure depicts the current
> and the new approach:
>
> IMA measurement list (current)
>
> entry# 1st boot 2nd boot 3rd boot
> +----+---------------+ +----+---------------+ +----+---------------+
> 1: | 10 | file1 measur. | | 10 | file3 measur. | | 10 | file2 measur. |
> +----+---------------+ +----+---------------+ +----+---------------+
> 2: | 10 | file2 measur. | | 10 | file2 measur. | | 10 | file3 measur. |
> +----+---------------+ +----+---------------+ +----+---------------+
> 3: | 10 | file3 measur. | | 10 | file1 measur. | | 10 | file4 measur. |
> +----+---------------+ +----+---------------+ +----+---------------+
>
> PCR: Extend != Extend != Extend
> file1, file2, file3 file3, file2, file1 file2, file3, file4
>
>
> PCR Extend definition:
>
> PCR(new value) = Hash(Hash(meas. entry), PCR(previous value))
>
> A new entry in the measurement list is created by IMA for each file
> access. Assuming that file1, file2 and file3 are files provided by the
> software vendor and file4 is an unknown file, the first two PCR values
> above represent a good system state, the third a bad system state. The
> PCR values are the result of the PCR extend operation performed for each
> measurement entry with the digest of the measurement entry as an input.
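>
> The order-dependence of the PCR extend operation can be sketched with a
> few lines of Python (an illustrative model, not kernel code; SHA-256
> stands in for the PCR bank algorithm and the byte strings stand in for
> measurement entries):

```python
import hashlib

def pcr_extend(pcr, entry):
    # PCR(new value) = Hash(Hash(meas. entry), PCR(previous value)),
    # as in the definition above (argument order is illustrative)
    return hashlib.sha256(hashlib.sha256(entry).digest() + pcr).digest()

def final_pcr(measurements):
    pcr = bytes(32)               # PCR starts at all zeros
    for entry in measurements:
        pcr = pcr_extend(pcr, entry)
    return pcr.hex()

boot1 = [b"file1", b"file2", b"file3"]   # 1st boot
boot2 = [b"file3", b"file2", b"file1"]   # 2nd boot, same files

# Same set of good files, different access order: different PCR values.
print(final_pcr(boot1) != final_pcr(boot2))  # True
```

> The sketch shows why two boots that access the same good files in a
> different order still end with different PCR values.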
>
> IMA measurement list (with DIGLIM)
>
> dlist
> +--------------+
> | header |
> +--------------+
> | file1 digest |
> | file2 digest |
> | file3 digest |
> +--------------+
>
> dlist is a digest list containing the digest of file1, file2 and file3.
> In the intended scenario, it is generated by a software vendor at the
> end of the building process, and retrieved by the administrator of the
> system where the digest list is loaded.
>
> entry# 1st boot 2nd boot 3rd boot
> +----+---------------+ +----+---------------+ +----+---------------+
> 0: | 11 | dlist measur. | | 11 | dlist measur. | | 11 | dlist measur. |
> +----+---------------+ +----+---------------+ +----+---------------+
> 1: < file1 measur. skip > < file3 measur. skip > < file2 measur. skip >
>
> 2: < file2 measur. skip > < file2 measur. skip > < file3 measur. skip >
> +----+---------------+
> 3: < file3 measur. skip > < file1 measur. skip > | 11 | file4 measur. |
> +----+---------------+
>
> PCR: Extend = Extend != Extend
> dlist dlist dlist, file4
>
> The first entry in the measurement list contains the digest of the
> digest list uploaded to the kernel at kernel initialization time.
>
> When a file is accessed, IMA queries DIGLIM with the calculated file
> digest and, if it is found, IMA skips the measurement.
>
> Thus, the only information sent to remote verifiers is: the list of
> files that could possibly be accessed (from the digest list), but not
> whether or when they were accessed; and the measurements of unknown
> files.
>
> Despite providing less information, this solution has the advantage that
> the good system state (i.e. when only file1, file2 and file3 are
> accessed) can now be represented with a deterministic PCR value (the PCR
> is extended only with the measurement of the digest list). Also, the bad
> system state can still be distinguished from the good state (the PCR is
> extended also with the measurement of file4).
>
> If a TPM key is bound to the good PCR value, the TPM would allow the key
> to be used if file1, file2 or file3 are accessed, regardless of the
> sequence in which they are accessed (the PCR value does not change), and
> would revoke the permission when the unknown file4 is accessed (the PCR
> value changes). If a system is able to establish a TLS connection with a
> peer, this implicitly means that the system was in a good state (i.e.
> file4 was not accessed, otherwise the TPM would have denied the usage of
> the TPM key due to the key policy).
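>
> The deterministic behavior can be modeled with a short illustrative
> Python sketch (not kernel code; SHA-256 stands in for the PCR bank
> algorithm and byte strings stand in for files and for the digest list
> measurement):

```python
import hashlib

def pcr_extend(pcr, entry):
    # PCR(new value) = Hash(Hash(meas. entry), PCR(previous value))
    return hashlib.sha256(hashlib.sha256(entry).digest() + pcr).digest()

def boot_pcr(digest_list, accesses):
    # entry 0: the digest list itself is always measured
    pcr = pcr_extend(bytes(32), b"dlist:" + b"".join(sorted(digest_list)))
    for f in accesses:
        if hashlib.sha256(f).digest() in digest_list:
            continue                  # digest found: measurement skipped
        pcr = pcr_extend(pcr, f)      # unknown file: PCR is extended
    return pcr.hex()

dlist = {hashlib.sha256(f).digest() for f in (b"file1", b"file2", b"file3")}

good1 = boot_pcr(dlist, [b"file1", b"file2", b"file3"])  # 1st boot
good2 = boot_pcr(dlist, [b"file3", b"file2", b"file1"])  # 2nd boot
bad   = boot_pcr(dlist, [b"file2", b"file3", b"file4"])  # 3rd boot

print(good1 == good2, good1 != bad)  # True True
```

> A TPM key bound to the good PCR value would then work across boots
> regardless of the access order, and stop working as soon as the
> unknown file4 is accessed.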
>
> Benefits for IMA Appraisal
>
> Extending secure boot to applications means being able to verify the
> provenance of the files accessed. IMA does this by verifying file
> signatures with a key it trusts, which requires Linux distribution
> vendors to additionally include in the package header a signature for
> each file that must be verified (there is a dedicated
> RPMTAG_FILESIGNATURES section in the RPM header).
>
> The proposed approach is instead to verify data provenance from
> metadata (file digests) already available in existing packages. IMA
> would verify the signature of the package metadata and search the
> kernel hash table for the file digests extracted from that metadata.
>
> For RPMs, file digests can be found in the RPMTAG_FILEDIGESTS section
> of RPMTAG_IMMUTABLE, whose signature is in RPMTAG_RSAHEADER. For DEBs,
> file digests (unsafe to use due to a weak digest algorithm) can be
> found in the md5sums file, which can be indirectly verified from
> Release.gpg.
>
> The following figure highlights the differences between the current and
> the proposed approach.
>
> IMA appraisal (current solution, with file signatures):
>
> appraise
> +-----------+
> V |
> +-------------------------+-----+ +-------+-----+ |
> | RPM header | | ima rpm | file1 | sig | |
> | ... | | plugin +-------+-----+ +-----+
> | file1 sig [to be added] | sig |--------> ... | IMA |
> | ... | | +-------+-----+ +-----+
> | fileN sig [to be added] | | | fileN | sig |
> +-------------------------+-----+ +-------+-----+
>
> In this case, file signatures must be added to the RPM header, so that
> the ima rpm plugin can extract them together with the file content. The
> RPM header signature is not used.
>
> IMA appraisal (with DIGLIM):
>
> kernel hash table
> with RPM header content
> +---+ +--------------+
> | |--->| file1 digest |
> +---+ +--------------+
> ...
> +---+ appraise (file1)
> | | <--------------+
> +----------------+-----+ +---+ |
> | RPM header | | ^ |
> | ... | | digest_list | |
> | file1 digest | sig | rpm plugin | +-------+ +-----+
> | ... | |-------------+--->| file1 | | IMA |
> | fileN digest | | +-------+ +-----+
> +----------------+-----+ |
> ^ |
> +------------------------------------+
> appraise (RPM header)
>
> In this case, the RPM header is used as is, and its signature is used
> for IMA appraisal. Then, the digest_list rpm plugin executes the user
> space parser to parse the RPM header and add the extracted digests to a
> hash table in the kernel. IMA appraisal of the files in the RPM package
> consists of searching for their digests in the hash table.
>
> Besides reusing available information as a digest list, another
> advantage is the lower computational overhead compared to the solution
> with file signatures (one signature verification for many files plus a
> digest lookup per file, instead of a signature verification per file;
> see Preliminary Performance Evaluation for more details).
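>
> The appraisal flow can be sketched as follows (a hypothetical Python
> model: verify_signature() and sign() are stand-ins for the real RPM
> header signature verification, and the "metadata" is simplified to a
> comma-separated list of file digests):

```python
import hashlib, hmac

# Stand-in for the RPM header signature (RPMTAG_RSAHEADER); a keyed MAC
# is used here only to keep the sketch self-contained.
KEY = b"vendor-signing-key"

def sign(metadata):
    return hmac.new(KEY, metadata, hashlib.sha256).hexdigest()

def verify_signature(metadata, signature):
    return hmac.compare_digest(sign(metadata), signature)

def load_digest_list(metadata, signature):
    # One signature verification covers all file digests in the package.
    if not verify_signature(metadata, signature):
        raise ValueError("digest list signature invalid")
    return set(metadata.split(b","))      # models the kernel hash table

def appraise(content, hash_table):
    # Per-file cost: one digest computation and one hash table lookup.
    return hashlib.sha256(content).hexdigest().encode() in hash_table

files = [b"file1", b"file2", b"file3"]
metadata = b",".join(hashlib.sha256(f).hexdigest().encode() for f in files)
table = load_digest_list(metadata, sign(metadata))

print(appraise(b"file1", table), appraise(b"file4", table))  # True False
```

> The model mirrors the cost argument above: the expensive signature
> check runs once per package, while each file access only pays for a
> digest computation and a lookup.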
>
>
> Lifecycle
>
> The lifecycle of DIGLIM is represented in the following figure:
>
> Vendor premises (release process with modifications):
>
> +------------+ +-----------------------+ +------------------------+
> | 1. build a | | 2. generate and sign | | 3. publish the package |
> | package |-->| a digest list from |-->| and digest list in |
> | | | packaged files | | a repository |
> +------------+ +-----------------------+ +------------------------+
> |
> |
> User premises: |
> V
> +---------------------+ +------------------------+ +-----------------+
> | 6. use digest lists | | 5. download the digest | | 4. download and |
> | for measurement |<--| list and upload to |<--| install the |
> | and/or appraisal | | the kernel | | package |
> +---------------------+ +------------------------+ +-----------------+
>
> The figure above represents all the steps when a digest list is
> generated separately. However, as mentioned in Benefits, in most cases
> existing packages can already be used as a source for digest lists,
> limiting the effort for software vendors.
>
> If, for example, RPMs are used as a source for digest lists, the figure
> above becomes:
>
> Vendor premises (release process without modifications):
>
> +------------+ +------------------------+
> | 1. build a | | 2. publish the package |
> | package |-->| in a repository |---------------------+
> | | | | |
> +------------+ +------------------------+ |
> |
> |
> User premises: |
> V
> +---------------------+ +------------------------+ +-----------------+
> | 5. use digest lists | | 4. extract digest list | | 3. download and |
> | for measurement |<--| from the package |<--| install the |
> | and/or appraisal | | and upload to the | | package |
> | | | kernel | | |
> +---------------------+ +------------------------+ +-----------------+
>
> Step 4 can be performed with the digest_list rpm plugin and the user
> space parser, without changes to rpm itself.
>
>
> Security Assumptions
>
> As mentioned in the Introduction, DIGLIM will be primarily used in
> conjunction with IMA to enforce a mandatory policy on all user space
> processes, including those owned by root. In a system with a
> locked-down kernel, even root cannot affect the enforcement of the
> mandatory policy or, if changes are permitted, cannot make them without
> being detected.
>
> Given that the targets of the enforcement are user space processes,
> DIGLIM cannot be placed in the target, as a Mandatory Access Control
> (MAC) design requires the components enforcing the mandatory policy to
> be separate from the target.
>
> While locking down a system and limiting actions with a mandatory
> policy is generally perceived by users as an obstacle, it has
> noteworthy benefits for the users themselves.
>
> First, it would promptly block attempts by malicious software to steal
> or misuse user assets. Although users could query the package managers
> to detect such software, detection would happen after the fact, or not
> at all if the malicious software tampered with the package managers.
> With a mandatory policy enforced by the kernel, users would still be
> able to decide which software they want to execute, except that, unlike
> package managers, the kernel is not affected by user space processes or
> root.
>
> Second, it might make systems more easily verifiable from outside, due
> to the limited set of actions the system allows. When users connect to
> a server, not only would they be able to verify the server's identity,
> which is already possible with communication protocols like TLS, but
> also whether the software running on that server can be trusted to
> handle their sensitive data.
>
>
> Adoption
>
> A former version of DIGLIM is used in the following OSes:
>
> - openEuler 20.09
> https://github.com/openeuler-mirror/kernel/tree/openEuler-20.09
> - openEuler 21.03
> https://github.com/openeuler-mirror/kernel/tree/openEuler-21.03
>
> Originally, DIGLIM was part of IMA (known as IMA Digest Lists). In this
> version, it has been redesigned as a standalone module with an API that
> makes its functionality accessible by IMA and, eventually, other
> subsystems.
>
>
> User Space Support
>
> Digest lists can be generated and managed with digest-list-tools:
>
> https://github.com/openeuler-mirror/digest-list-tools
>
> It includes two main applications:
>
> - gen_digest_lists: generates digest lists from files in the
> filesystem or from the RPM database (more digest list sources can be
> supported);
> - manage_digest_lists: converts and uploads digest lists to the
> kernel.
>
> Integration with rpm is done with the digest_list plugin:
>
> https://gitee.com/src-openeuler/rpm/blob/master/Add-digest-list-plugin.patch
>
> This plugin writes the RPM header and its signature to a file, so that
> the file is ready to be appraised by IMA, and calls the user space
> parser to convert and upload the digest list to the kernel.
>
>
> Simple Usage Example (Tested with Fedora 33)
>
> 1. Digest list generation (RPM headers and their signature are copied
> to the specified directory):
>
> # mkdir /etc/digest_lists
> # gen_digest_lists -t file -f rpm+db -d /etc/digest_lists -o add
>
> 2. Digest list upload with the user space parser:
>
> # manage_digest_lists -p add-digest -d /etc/digest_lists
>
> 3. First digest list query:
>
> # echo sha256-$(sha256sum /bin/cat) > /sys/kernel/security/integrity/diglim/digest_query
> # cat /sys/kernel/security/integrity/diglim/digest_query
> sha256-[...]-0-file_list-rpm-coreutils-8.32-18.fc33.x86_64 (actions: 0):
> version: 1, algo: sha256, type: 2, modifiers: 1, count: 106, datalen: 3392
>
> 4. Second digest list query:
>
> # echo sha256-$(sha256sum /bin/zip) > /sys/kernel/security/integrity/diglim/digest_query
> # cat /sys/kernel/security/integrity/diglim/digest_query
> sha256-[...]-0-file_list-rpm-zip-3.0-27.fc33.x86_64 (actions: 0): version: 1,
> algo: sha256, type: 2, modifiers: 1, count: 4, datalen: 128
>
>
> Preliminary Performance Evaluation
>
> This section provides an initial estimation of the overhead introduced
> by DIGLIM. The estimation has been performed on a Fedora 33 virtual
> machine with 1447 packages installed. The virtual machine has 16 vCPU
> (host CPU: AMD Ryzen Threadripper PRO 3955WX 16-Cores) and 2G of RAM
> (host memory: 64G). The virtual machine also has a vTPM with libtpms and
> swtpm as backend.
>
> After writing the RPM headers to files, the size of the directory
> containing them is 36M.
>
> After converting the RPM headers to the compact digest list, the size of
> the data being uploaded to the kernel is 3.6M.
>
> The time to load the entire RPM database is 0.628s.
>
> After loading the digest lists to the kernel, the slab usage due to
> indexing is (obtained with slab_nomerge in the kernel command line):
>
> OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
> 118144 118144 100% 0,03K 923 128 3692K
> digest_list_item_ref_cache
> 102400 102400 100% 0,03K 800 128 3200K digest_item_cache
> 2646 2646 100% 0,09K 63 42 252K digest_list_item_cache
>
> The stats, obtained from the digests_count interface (introduced later
> in this patch set), are:
>
> Parser digests: 0
> File digests: 99100
> Metadata digests: 0
> Digest list digests: 1423
>
> On this installation, this would be the worst case in which all files
> are measured and/or appraised, which is currently not recommended
> without enforcing an integrity policy protecting mutable files. Infoflow
> LSM is a component to accomplish this task:
>
> https://patchwork.kernel.org/project/linux-integrity/cover/[email protected]/
>
> The first manageable goal of IMA with DIGLIM is to use an execution
> policy, with measurement and/or appraisal of files executed or mapped in
> memory as executable (in addition to kernel modules and firmware). In
> this case, the digest list contains the digest only for those files. The
> numbers above change as follows.
>
> After converting the RPM headers to the compact digest list, the size of
> the data being uploaded to the kernel is 208K.
>
> The time to load the digest of binaries and shared libraries is 0.062s.
>
> After loading the digest lists to the kernel, the slab usage due to
> indexing is:
>
> OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
> 7168 7168 100% 0,03K 56 128 224K digest_list_item_ref_cache
> 7168 7168 100% 0,03K 56 128 224K digest_item_cache
> 1134 1134 100% 0,09K 27 42 108K digest_list_item_cache
>
> The stats, obtained from the digests_count interface, are:
>
> Parser digests: 0
> File digests: 5986
> Metadata digests: 0
> Digest list digests: 1104
>
> Comparison with IMA
>
> This section compares the performance between the current solution for
> IMA measurement and appraisal, and IMA with DIGLIM.
>
> Workload A (without DIGLIM):
>
> 1. cat file[0-5985] > /dev/null
>
> Workload B (with DIGLIM):
>
> 1. echo $PWD/0-file_list-compact-file[0-1103] > <securityfs>/integrity/diglim/digest_list_add
> 2. cat file[0-5985] > /dev/null
>
> Workload A execution time without IMA policy:
>
> real 0m0,155s
> user 0m0,008s
> sys 0m0,066s
>
> Measurement
>
> IMA policy:
>
> measure fowner=2000 func=FILE_CHECK mask=MAY_READ use_diglim=allow
> pcr=11 ima_template=ima-sig
>
> use_diglim is a policy keyword not yet supported by IMA.
>
> Workload A execution time with IMA and 5986 files with signature
> measured:
>
> real 0m8,273s
> user 0m0,008s
> sys 0m2,537s
>
> Workload B execution time with IMA, 1104 digest lists with signature
> measured and uploaded to the kernel, and 5986 files with signature
> accessed but not measured (due to the file digest being found in the
> hash table):
>
> real 0m1,837s
> user 0m0,036s
> sys 0m0,583s
>
> Appraisal
>
> IMA policy:
>
> appraise fowner=2000 func=FILE_CHECK mask=MAY_READ use_diglim=allow
>
> use_diglim is a policy keyword not yet supported by IMA.
>
> Workload A execution time with IMA and 5986 files with file signature
> appraised:
>
> real 0m2,197s
> user 0m0,011s
> sys 0m2,022s
>
> Workload B execution time with IMA, 1104 digest lists with signature
> appraised and uploaded to the kernel, and with 5986 files with signature
> not verified (due to the file digest being found in the hash table):
>
> real 0m0,982s
> user 0m0,020s
> sys 0m0,865s
>
> [1] LSS EU 2019 slides and video
>
> [2] FutureTPM EU project, final review meeting demo slides and video
>
> v2:
> - fix documentation content and style issues (suggested by Mauro)
> - fix basic definitions description and ensure that the _reserved field of
> compact list headers is zero (suggested by Greg KH)
> - document the static inline functions to access compact list data
> (suggested by Mauro)
> - rename htable global variable to diglim_htable (suggested by Mauro)
> - add IMA API to retrieve integrity information about a file or buffer
> - display the digest list in the original format (same endianness as when
> it was uploaded)
> - support digest lists with appended signature (for IMA appraisal)
> - fix bugs in the tests
> - allocate the digest list label in digest_list_add()
> - rename digest_label interface to digest_list_label
> - check input for digest_query and digest_list_label interfaces
> - don't remove entries in digest_lists_loaded if the same digest list is
> uploaded again to the kernel
> - deny write access to the digest lists while IMA actions are retrieved
> - add new test digest_list_add_del_test_file_upload_measured_chown
> - remove unused COMPACT_KEY type
>
> v1:
> - remove 'ima: Add digest, algo, measured parameters to
> ima_measure_critical_data()', replaced by:
> https://lore.kernel.org/linux-integrity/20210705090922.3321178-1-[email protected]/
> - add 'Lifecycle' subsection to better clarify how digest lists are
> generated and used (suggested by Greg KH)
> - remove 'Possible Usages' subsection and add 'Benefits for IMA
> Measurement' and 'Benefits for IMA Appraisal' subsubsections
> - add 'Preliminary Performance Evaluation' subsection
> - declare digest_offset and hdr_offset in the digest_list_item_ref
> structure as u32 (sufficient for digest lists of 4G) to make room for a
> list_head structure (digest_list_item_ref size: 32)
> - implement digest list reference management with a linked list instead of
> an array
> - reorder structure members for better alignment (suggested by Mauro)
> - rename digest_lookup() to __digest_lookup() (suggested by Mauro)
> - introduce an object cache for each defined structure
> - replace atomic_long_t with unsigned long in h_table structure definition
> (suggested by Greg KH)
> - remove GPL2 license text and file names (suggested by Greg KH)
> - ensure that the _reserved field of compact_list_hdr is equal to zero
> (suggested by Greg KH)
> - dynamically allocate the buffer in digest_lists_show_htable_len() to
> avoid frame size warning (reported by kernel test robot, dynamic
> allocation suggested by Mauro)
> - split documentation in multiple files and reference the source code
> (suggested by Mauro)
> - use #ifdef in include/linux/diglim.h
> - improve generation of event name for IMA measurements
> - add new patch to introduce the 'Remote Attestation' section in the
> documentation
> - fix assignment of actions variable in digest_list_read() and
> digest_list_write()
> - always release dentry reference when digest_list_get_secfs_files() is
> called
> - rewrite add/del and query interfaces to take advantage of m->private
> - prevent deletion of a digest list only if there are actions done at
> addition time that are not currently being performed
> - fix doc warnings (replace Returns with Return:)
> - perform queries of digest list digests in the existing tests
> - add new tests: digest_list_add_del_test_file_upload_measured,
> digest_list_check_measurement_list_test_file_upload and
> digest_list_check_measurement_list_test_buffer_upload
> - don't return a value from digest_del(), digest_list_ref_del() and
> digest_list_del()
> - improve Makefile for tests
>
> Roberto Sassu (13):
> diglim: Overview
> diglim: Basic definitions
> diglim: Objects
> diglim: Methods
> diglim: Parser
> diglim: IMA info
> diglim: Interfaces - digest_list_add, digest_list_del
> diglim: Interfaces - digest_lists_loaded
> diglim: Interfaces - digest_list_label
> diglim: Interfaces - digest_query
> diglim: Interfaces - digests_count
> diglim: Remote Attestation
> diglim: Tests
>
> .../security/diglim/architecture.rst | 46 +
> .../security/diglim/implementation.rst | 228 +++
> Documentation/security/diglim/index.rst | 14 +
> .../security/diglim/introduction.rst | 599 +++++++
> .../security/diglim/remote_attestation.rst | 87 +
> Documentation/security/diglim/tests.rst | 70 +
> Documentation/security/index.rst | 1 +
> MAINTAINERS | 20 +
> include/linux/diglim.h | 28 +
> include/linux/kernel_read_file.h | 1 +
> include/uapi/linux/diglim.h | 51 +
> security/integrity/Kconfig | 1 +
> security/integrity/Makefile | 1 +
> security/integrity/diglim/Kconfig | 11 +
> security/integrity/diglim/Makefile | 8 +
> security/integrity/diglim/diglim.h | 232 +++
> security/integrity/diglim/fs.c | 865 ++++++++++
> security/integrity/diglim/ima.c | 122 ++
> security/integrity/diglim/methods.c | 513 ++++++
> security/integrity/diglim/parser.c | 274 ++++
> security/integrity/integrity.h | 4 +
> tools/testing/selftests/Makefile | 1 +
> tools/testing/selftests/diglim/Makefile | 19 +
> tools/testing/selftests/diglim/common.c | 135 ++
> tools/testing/selftests/diglim/common.h | 32 +
> tools/testing/selftests/diglim/config | 3 +
> tools/testing/selftests/diglim/selftest.c | 1442 +++++++++++++++++
> 27 files changed, 4808 insertions(+)
> create mode 100644 Documentation/security/diglim/architecture.rst
> create mode 100644 Documentation/security/diglim/implementation.rst
> create mode 100644 Documentation/security/diglim/index.rst
> create mode 100644 Documentation/security/diglim/introduction.rst
> create mode 100644 Documentation/security/diglim/remote_attestation.rst
> create mode 100644 Documentation/security/diglim/tests.rst
> create mode 100644 include/linux/diglim.h
> create mode 100644 include/uapi/linux/diglim.h
> create mode 100644 security/integrity/diglim/Kconfig
> create mode 100644 security/integrity/diglim/Makefile
> create mode 100644 security/integrity/diglim/diglim.h
> create mode 100644 security/integrity/diglim/fs.c
> create mode 100644 security/integrity/diglim/ima.c
> create mode 100644 security/integrity/diglim/methods.c
> create mode 100644 security/integrity/diglim/parser.c
> create mode 100644 tools/testing/selftests/diglim/Makefile
> create mode 100644 tools/testing/selftests/diglim/common.c
> create mode 100644 tools/testing/selftests/diglim/common.h
> create mode 100644 tools/testing/selftests/diglim/config
> create mode 100644 tools/testing/selftests/diglim/selftest.c
>
> --
> 2.25.1