|
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
- Fix bug in crypto_skcipher that breaks the new ti driver
- Check for invalid assoclen in essiv
* tag 'v6.18-p3' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: essiv - Check ssize for decryption and in-place encryption
crypto: skcipher - Fix reqsize handling
|
|
Move the ssize check to the start in essiv_aead_crypt so that
it's also checked for decryption and in-place encryption.
Reported-by: Muhammad Alifa Ramdhan <ramdhan@starlabs.sg>
Fixes: be1eb7f78aa8 ("crypto: essiv - create wrapper template for ESSIV generation")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
- Fix zstd regression
- Ensure ti driver algorithms are set as async
- Revert patch disabling SHA1 in FIPS mode
- Fix RNG set_ent null-pointer dereference
* tag 'v6.18-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: rng - Ensure set_ent is always present
Revert "crypto: testmgr - desupport SHA-1 for FIPS 140"
crypto: ti - Add CRYPTO_ALG_ASYNC flag to DTHEv2 AES algos
crypto: zstd - Fix compression bug caused by truncation
|
|
Commit afddce13ce81d ("crypto: api - Add reqsize to crypto_alg")
introduced the cra_reqsize field in struct crypto_alg to replace the
type-specific reqsize fields. Judging from the commit description and
the follow-up commits, it was only wired up for the ahash and acomp
frameworks.
However, cra_reqsize is now being recommended for use in all crypto
algorithms [1] instead of setting reqsize via crypto_*_set_reqsize().
Using cra_reqsize in skcipher algorithms therefore causes memory
corruption and crashes, because the skcipher framework has not been
updated to pick up the request size from cra_reqsize. [2]
Add the missing set_reqsize call in the skcipher init path so that
reqsize is initialized correctly for these algorithms.
[1]: https://lore.kernel.org/linux-crypto/aCL8BxpHr5OpT04k@gondor.apana.org.au/
[2]: https://gist.github.com/Pratham-T/24247446f1faf4b7843e4014d5089f6b
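For reference, a minimal sketch of the conventional
crypto_skcipher_set_reqsize() usage in a transform's init callback
(driver and context names below are hypothetical, not the actual TI
code):
    /* per-request context this (hypothetical) driver needs */
    struct my_aes_reqctx {
            u8 iv[16];
            bool encrypt;
    };
    static int my_aes_init_tfm(struct crypto_skcipher *tfm)
    {
            /*
             * Reserve per-request memory; the framework adds this to
             * every skcipher_request allocated for this transform.
             */
            crypto_skcipher_set_reqsize(tfm,
                                        sizeof(struct my_aes_reqctx));
            return 0;
    }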
Fixes: afddce13ce81d ("crypto: api - Add reqsize to crypto_alg")
Fixes: 52f641bc63a4 ("crypto: ti - Add driver for DTHE V2 AES Engine (ECB, CBC)")
Signed-off-by: T Pratham <t-pratham@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Ensure that set_ent is always present, since only drbg provides it;
other rng algorithms would otherwise leave it NULL, leading to a
NULL-pointer dereference when it is invoked.
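A sketch of the idea only (the helper name and where it is installed
are assumptions, not taken from the actual patch):
    static void crypto_rng_default_set_ent(struct crypto_rng *tfm,
                                           const u8 *data,
                                           unsigned int len)
    {
            /* non-DRBG generators simply ignore extra entropy */
    }
    /* at registration time: */
    if (!alg->set_ent)
            alg->set_ent = crypto_rng_default_set_ent;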
Fixes: 77ebdabe8de7 ("crypto: af_alg - add extra parameters for DRBG interface")
Reported-by: Yiqi Sun <sunyiqixm@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This reverts commit 9d50a25eeb05c45fef46120f4527885a14c84fb2.
Reported-by: Jiri Slaby <jirislaby@kernel.org>
Reported-by: Jon Kohler <jon@nutanix.com>
Link: https://lore.kernel.org/all/05b7ef65-37bb-4391-9ec9-c382d51bae4d@kernel.org/
Link: https://lore.kernel.org/all/26F8FCC9-B448-4A89-81DF-6BAADA03E174@nutanix.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"Drivers:
- Add ciphertext hiding support to ccp
- Add hashjoin, gather and UDMA data move features to hisilicon
- Add lz4 and lz77_only to hisilicon
- Add xilinx hwrng driver
- Add ti driver with ecb/cbc aes support
- Add ring buffer idle and command queue telemetry for GEN6 in qat
Others:
- Use rcu_dereference_all to stop false alarms in rhashtable
- Fix CPU number wraparound in padata"
* tag 'v6.18-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (78 commits)
dt-bindings: rng: hisi-rng: convert to DT schema
crypto: doc - Add explicit title heading to API docs
hwrng: ks-sa - fix division by zero in ks_sa_rng_init
KEYS: X.509: Fix Basic Constraints CA flag parsing
crypto: anubis - simplify return statement in anubis_mod_init
crypto: hisilicon/qm - set NULL to qm->debug.qm_diff_regs
crypto: hisilicon/qm - clear all VF configurations in the hardware
crypto: hisilicon - enable error reporting again
crypto: hisilicon/qm - mask axi error before memory init
crypto: hisilicon/qm - invalidate queues in use
crypto: qat - Return pointer directly in adf_ctl_alloc_resources
crypto: aspeed - Fix dma_unmap_sg() direction
rhashtable: Use rcu_dereference_all and rcu_dereference_all_check
crypto: comp - Use same definition of context alloc and free ops
crypto: omap - convert from tasklet to BH workqueue
crypto: qat - Replace kzalloc() + copy_from_user() with memdup_user()
crypto: caam - double the entropy delay interval for retry
padata: WQ_PERCPU added to alloc_workqueue users
padata: replace use of system_unbound_wq with system_dfl_wq
crypto: cryptd - WQ_PERCPU added to alloc_workqueue users
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull MM updates from Andrew Morton:
- "mm, swap: improve cluster scan strategy" from Kairui Song improves
performance and reduces the failure rate of swap cluster allocation
- "support large align and nid in Rust allocators" from Vitaly Wool
permits Rust allocators to set NUMA node and large alignment when
performing slub and vmalloc reallocs
- "mm/damon/vaddr: support stat-purpose DAMOS" from Yueyang Pan extend
DAMOS_STAT's handling of the DAMON operations sets for virtual
address spaces for ops-level DAMOS filters
- "execute PROCMAP_QUERY ioctl under per-vma lock" from Suren
Baghdasaryan reduces mmap_lock contention during reads of
/proc/pid/maps
- "mm/mincore: minor clean up for swap cache checking" from Kairui Song
performs some cleanup in the swap code
- "mm: vm_normal_page*() improvements" from David Hildenbrand provides
code cleanup in the pagemap code
- "add persistent huge zero folio support" from Pankaj Raghav provides
a block layer speedup by optionally making the huge_zero_page
persistent, instead of releasing it when its refcount
falls to zero
- "kho: fixes and cleanups" from Mike Rapoport adds a few touchups to
the recently added Kexec Handover feature
- "mm: make mm->flags a bitmap and 64-bit on all arches" from Lorenzo
Stoakes turns mm_struct.flags into a bitmap, to end the constant
struggle with space shortage on 32-bit conflicting with 64-bit's
needs
- "mm/swapfile.c and swap.h cleanup" from Chris Li cleans up some swap
code
- "selftests/mm: Fix false positives and skip unsupported tests" from
Donet Tom fixes a few things in our selftests code
- "prctl: extend PR_SET_THP_DISABLE to only provide THPs when advised"
from David Hildenbrand "allows individual processes to opt-out of
THP=always into THP=madvise, without affecting other workloads on the
system".
It's a long story - the [1/N] changelog spells out the considerations
- "Add and use memdesc_flags_t" from Matthew Wilcox gets us started on
the memdesc project. Please see
https://kernelnewbies.org/MatthewWilcox/Memdescs and
https://blogs.oracle.com/linux/post/introducing-memdesc
- "Tiny optimization for large read operations" from Chi Zhiling
improves the efficiency of the pagecache read path
- "Better split_huge_page_test result check" from Zi Yan improves our
folio splitting selftest code
- "test that rmap behaves as expected" from Wei Yang adds some rmap
selftests
- "remove write_cache_pages()" from Christoph Hellwig removes that
function and converts its two remaining callers
- "selftests/mm: uffd-stress fixes" from Dev Jain fixes some UFFD
selftests issues
- "introduce kernel file mapped folios" from Boris Burkov introduces
the concept of "kernel file pages". Using these permits btrfs to
account its metadata pages to the root cgroup, rather than to the
cgroups of random inappropriate tasks
- "mm/pageblock: improve readability of some pageblock handling" from
Wei Yang provides some readability improvements to the page allocator
code
- "mm/damon: support ARM32 with LPAE" from SeongJae Park teaches DAMON
to understand arm32 highmem
- "tools: testing: Use existing atomic.h for vma/maple tests" from
Brendan Jackman performs some code cleanups and deduplication under
tools/testing/
- "maple_tree: Fix testing for 32bit compiles" from Liam Howlett fixes
a couple of 32-bit issues in tools/testing/radix-tree.c
- "kasan: unify kasan_enabled() and remove arch-specific
implementations" from Sabyrzhan Tasbolatov moves KASAN arch-specific
initialization code into a common arch-neutral implementation
- "mm: remove zpool" from Johannes Weiner removes zspool - an
indirection layer which now only redirects to a single thing
(zsmalloc)
- "mm: task_stack: Stack handling cleanups" from Pasha Tatashin makes a
couple of cleanups in the fork code
- "mm: remove nth_page()" from David Hildenbrand makes rather a lot of
adjustments at various nth_page() callsites, eventually permitting
the removal of that undesirable helper function
- "introduce kasan.write_only option in hw-tags" from Yeoreum Yun
creates a KASAN write-only mode for ARM, using that architecture's
memory tagging feature. It is felt that a write-only mode KASAN is
suitable for use in production systems rather than debug-only
- "mm: hugetlb: cleanup hugetlb folio allocation" from Kefeng Wang does
some tidying in the hugetlb folio allocation code
- "mm: establish const-correctness for pointer parameters" from Max
Kellermann makes quite a number of the MM API functions more accurate
about the constness of their arguments. This was getting in the way
of subsystems (in this case CEPH) when they attempt to improve
their own const/non-const accuracy
- "Cleanup free_pages() misuse" from Vishal Moola fixes a number of
code sites which were confused over when to use free_pages() vs
__free_pages()
- "Add Rust abstraction for Maple Trees" from Alice Ryhl makes the
mapletree code accessible to Rust. Required by nouveau and by its
forthcoming successor: the new Rust Nova driver
- "selftests/mm: split_huge_page_test: split_pte_mapped_thp
improvements" from David Hildenbrand adds a fix and some cleanups to
the thp selftesting code
- "mm, swap: introduce swap table as swap cache (phase I)" from Chris
Li and Kairui Song is the first step along the path to implementing
"swap tables" - a new approach to swap allocation and state tracking
which is expected to yield speed and space improvements. This
patchset itself yields a 5-20% performance benefit in some situations
- "Some ptdesc cleanups" from Matthew Wilcox utilizes the new memdesc
layer to clean up the ptdesc code a little
- "Fix va_high_addr_switch.sh test failure" from Chunyu Hu fixes some
issues in our 5-level pagetable selftesting code
- "Minor fixes for memory allocation profiling" from Suren Baghdasaryan
addresses a couple of minor issues in the relatively new memory
allocation profiling feature
- "Small cleanups" from Matthew Wilcox has a few cleanups in
preparation for more memdesc work
- "mm/damon: add addr_unit for DAMON_LRU_SORT and DAMON_RECLAIM" from
Quanmin Yan makes some changes to DAMON in furtherance of supporting
arm highmem
- "selftests/mm: Add -Wunreachable-code and fix warnings" from Muhammad
Anjum adds that compiler check to selftests code and fixes the
fallout, by removing dead code
- "Improvements to Victim Process Thawing and OOM Reaper Traversal
Order" from zhongjinji makes a number of improvements in the OOM
killer: mainly thawing a more appropriate group of victim threads so
they can release resources
- "mm/damon: misc fixups and improvements for 6.18" from SeongJae Park
is a bunch of small and unrelated fixups for DAMON
- "mm/damon: define and use DAMON initialization check function" from
SeongJae Park implements reliability and maintainability improvements
to a recently-added bug fix
- "mm/damon/stat: expose auto-tuned intervals and non-idle ages" from
SeongJae Park provides additional transparency to userspace clients
of the DAMON_STAT information
- "Expand scope of khugepaged anonymous collapse" from Dev Jain removes
some constraints on khugepaged's collapsing of anon VMAs. It also
increases the success rate of MADV_COLLAPSE against an anon vma
- "mm: do not assume file == vma->vm_file in compat_vma_mmap_prepare()"
from Lorenzo Stoakes moves us further towards removal of
file_operations.mmap(). This patchset concentrates upon clearing up
the treatment of stacked filesystems
- "mm: Improve mlock tracking for large folios" from Kiryl Shutsemau
provides some fixes and improvements to mlock's tracking of large
folios. /proc/meminfo's "Mlocked" field became more accurate
- "mm/ksm: Fix incorrect accounting of KSM counters during fork" from
Donet Tom fixes several user-visible KSM stats inaccuracies across
forks and adds selftest code to verify these counters
- "mm_slot: fix the usage of mm_slot_entry" from Wei Yang addresses
some potential but presently benign issues in KSM's mm_slot handling
* tag 'mm-stable-2025-10-01-19-00' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (372 commits)
mm: swap: check for stable address space before operating on the VMA
mm: convert folio_page() back to a macro
mm/khugepaged: use start_addr/addr for improved readability
hugetlbfs: skip VMAs without shareable locks in hugetlb_vmdelete_list
alloc_tag: fix boot failure due to NULL pointer dereference
mm: silence data-race in update_hiwater_rss
mm/memory-failure: don't select MEMORY_ISOLATION
mm/khugepaged: remove definition of struct khugepaged_mm_slot
mm/ksm: get mm_slot by mm_slot_entry() when slot is !NULL
hugetlb: increase number of reserving hugepages via cmdline
selftests/mm: add fork inheritance test for ksm_merging_pages counter
mm/ksm: fix incorrect KSM counter handling in mm_struct during fork
drivers/base/node: fix double free in register_one_node()
mm: remove PMD alignment constraint in execmem_vmalloc()
mm/memory_hotplug: fix typo 'esecially' -> 'especially'
mm/rmap: improve mlock tracking for large folios
mm/filemap: map entire large folio faultaround
mm/fault: try to map the entire file folio in finish_fault()
mm/rmap: mlock large folios in try_to_unmap_one()
mm/rmap: fix a mlock race condition in folio_referenced_one()
...
|
|
Use size_t for the return value of zstd_compress_cctx as otherwise
negative errors will be truncated to a positive value.
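As a sketch of the resulting pattern (context field names are
assumptions; zstd reports errors in-band in its size_t return value,
so the value must not be narrowed before it is checked):
    size_t out_len;  /* was a narrower type, losing the error encoding */
    out_len = zstd_compress_cctx(ctx->cctx, dst, dst_len,
                                 src, src_len, &ctx->params);
    if (zstd_is_error(out_len))
            return -EINVAL;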
Reported-by: Han Xu <han.xu@nxp.com>
Fixes: f5ad93ffb541 ("crypto: zstd - convert to acomp")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: David Sterba <dsterba@suse.com>
Tested-by: Han Xu <han.xu@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Pull bpf updates from Alexei Starovoitov:
- Support pulling non-linear xdp data with bpf_xdp_pull_data() kfunc
(Amery Hung)
Applied as a stable branch in bpf-next and net-next trees.
- Support reading skb metadata via bpf_dynptr (Jakub Sitnicki)
Also a stable branch in bpf-next and net-next trees.
- Enforce expected_attach_type for tailcall compatibility (Daniel
Borkmann)
- Replace path-sensitive with path-insensitive live stack analysis in
the verifier (Eduard Zingerman)
This is a significant change in the verification logic. More details,
motivation, long term plans are in the cover letter/merge commit.
- Support signed BPF programs (KP Singh)
This is another major feature that took years to materialize.
Algorithm details are in the cover letter/merge commit.
- Add support for may_goto instruction to s390 JIT (Ilya Leoshkevich)
- Add support for may_goto instruction to arm64 JIT (Puranjay Mohan)
- Fix USDT SIB argument handling in libbpf (Jiawei Zhao)
- Allow uprobe-bpf program to change context registers (Jiri Olsa)
- Support signed loads from BPF arena (Kumar Kartikeya Dwivedi and
Puranjay Mohan)
- Allow access to union arguments in tracing programs (Leon Hwang)
- Optimize rcu_read_lock() + migrate_disable() combination where it's
used in BPF subsystem (Menglong Dong)
- Introduce bpf_task_work_schedule*() kfuncs to schedule deferred
execution of BPF callback in the context of a specific task using the
kernel’s task_work infrastructure (Mykyta Yatsenko)
- Enforce RCU protection for KF_RCU_PROTECTED kfuncs (Kumar Kartikeya
Dwivedi)
- Add stress test for rqspinlock in NMI (Kumar Kartikeya Dwivedi)
- Improve the precision of tnum multiplier verifier operation
(Nandakumar Edamana)
- Use tnums to improve is_branch_taken() logic (Paul Chaignon)
- Add support for atomic operations in arena in riscv JIT (Pu Lehui)
- Report arena faults to BPF error stream (Puranjay Mohan)
- Search for tracefs at /sys/kernel/tracing first in bpftool (Quentin
Monnet)
- Add bpf_strcasecmp() kfunc (Rong Tao)
- Support lookup_and_delete_elem command in BPF_MAP_STACK_TRACE (Tao
Chen)
* tag 'bpf-next-6.18' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (197 commits)
libbpf: Replace AF_ALG with open coded SHA-256
selftests/bpf: Add stress test for rqspinlock in NMI
selftests/bpf: Add test case for different expected_attach_type
bpf: Enforce expected_attach_type for tailcall compatibility
bpftool: Remove duplicate string.h header
bpf: Remove duplicate crypto/sha2.h header
libbpf: Fix error when st-prefix_ops and ops from differ btf
selftests/bpf: Test changing packet data from kfunc
selftests/bpf: Add stacktrace map lookup_and_delete_elem test case
selftests/bpf: Refactor stacktrace_map case with skeleton
bpf: Add lookup_and_delete_elem for BPF_MAP_STACK_TRACE
selftests/bpf: Fix flaky bpf_cookie selftest
selftests/bpf: Test changing packet data from global functions with a kfunc
bpf: Emit struct bpf_xdp_sock type in vmlinux BTF
selftests/bpf: Task_work selftest cleanup fixes
MAINTAINERS: Delete inactive maintainers from AF_XDP
bpf: Mark kfuncs as __noclone
selftests/bpf: Add kprobe multi write ctx attach test
selftests/bpf: Add kprobe write ctx attach test
selftests/bpf: Add uprobe context ip register change test
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux
Pull crypto library updates from Eric Biggers:
- Add a RISC-V optimized implementation of Poly1305. This code was
written by Andy Polyakov and contributed by Zhihang Shao.
- Migrate the MD5 code into lib/crypto/, and add KUnit tests for MD5.
Yes, it's still the 90s, and several kernel subsystems are still
using MD5 for legacy use cases. As long as that remains the case,
it's helpful to clean it up in the same way as I've been doing for
other algorithms.
Later, I plan to convert most of these users of MD5 to use the new
MD5 library API instead of the generic crypto API.
- Simplify the organization of the ChaCha, Poly1305, BLAKE2s, and
Curve25519 code.
Consolidate these into one module per algorithm, and centralize the
configuration and build process. This is the same reorganization that
has already been successful for SHA-1 and SHA-2.
- Remove the unused crypto_kpp API for Curve25519.
- Migrate the BLAKE2s and Curve25519 self-tests to KUnit.
- Always enable the architecture-optimized BLAKE2s code.
* tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux: (38 commits)
crypto: md5 - Implement export_core() and import_core()
wireguard: kconfig: simplify crypto kconfig selections
lib/crypto: tests: Enable Curve25519 test when CRYPTO_SELFTESTS
lib/crypto: curve25519: Consolidate into single module
lib/crypto: curve25519: Move a couple functions out-of-line
lib/crypto: tests: Add Curve25519 benchmark
lib/crypto: tests: Migrate Curve25519 self-test to KUnit
crypto: curve25519 - Remove unused kpp support
crypto: testmgr - Remove curve25519 kpp tests
crypto: x86/curve25519 - Remove unused kpp support
crypto: powerpc/curve25519 - Remove unused kpp support
crypto: arm/curve25519 - Remove unused kpp support
crypto: hisilicon/hpre - Remove unused curve25519 kpp support
lib/crypto: tests: Add KUnit tests for BLAKE2s
lib/crypto: blake2s: Consolidate into single C translation unit
lib/crypto: blake2s: Move generic code into blake2s.c
lib/crypto: blake2s: Always enable arch-optimized BLAKE2s code
lib/crypto: blake2s: Remove obsolete self-test
lib/crypto: x86/blake2s: Reduce size of BLAKE2S_SIGMA2
lib/crypto: chacha: Consolidate into single module
...
|
|
Fix the X.509 Basic Constraints CA flag parsing to correctly handle
the ASN.1 DER encoded structure. The parser was incorrectly treating
the length field as the boolean value.
Per RFC 5280 section 4.1, X.509 certificates must use ASN.1 DER encoding.
According to ITU-T X.690, a DER-encoded BOOLEAN is represented as:
Tag (0x01), Length (0x01), Value (0x00 for FALSE, 0xFF for TRUE)
The basicConstraints extension with CA:TRUE is encoded as:
SEQUENCE (0x30) | Length | BOOLEAN (0x01) | Length (0x01) | Value (0xFF)
where v[2] is the BOOLEAN tag, v[3] is its length field, and v[4] is
the boolean value itself.
The parser was checking v[3] (the length field, always 0x01) instead
of v[4] (the actual boolean value, 0xFF for TRUE in DER encoding).
Also handle the case where the extension is an empty SEQUENCE (30 00),
which is valid for CA:FALSE when the default value is omitted as
required by DER encoding rules (X.690 section 11.5).
Per ITU-T X.690-0207:
- Section 11.5: Default values must be omitted in DER
- Section 11.1: DER requires TRUE to be encoded as 0xFF
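A minimal sketch of the corrected check against the layout above (the
v/vlen naming follows this description; the real parser code may
differ):
    bool ca = false;
    if (vlen == 2 && v[0] == (ASN1_CONS_BIT | ASN1_SEQ)) {
            ca = false;     /* empty SEQUENCE (30 00): DER default, CA:FALSE */
    } else if (vlen >= 5 &&
               v[0] == (ASN1_CONS_BIT | ASN1_SEQ) &&
               v[2] == ASN1_BOOL &&     /* BOOLEAN tag */
               v[3] == 1 &&             /* boolean length */
               v[4] == 0xff)            /* DER encoding of TRUE */
            ca = true;                  /* was wrongly testing v[3] */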
Link: https://datatracker.ietf.org/doc/html/rfc5280
Link: https://www.itu.int/ITU-T/studygroups/com17/languages/X.690-0207.pdf
Fixes: 30eae2b037af ("KEYS: X.509: Parse Basic Constraints for CA")
Signed-off-by: Fan Wu <wufan@kernel.org>
Reviewed-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch extends the BPF_PROG_LOAD command by adding three new fields
to `union bpf_attr` in the user-space API:
- signature: A pointer to the signature blob.
- signature_size: The size of the signature blob.
- keyring_id: The serial number of a loaded kernel keyring (e.g.,
the user or session keyring) containing the trusted public keys.
When a BPF program is loaded with a signature, the kernel:
1. Retrieves the trusted keyring using the provided `keyring_id`.
2. Verifies the supplied signature against the BPF program's
instruction buffer.
3. If the signature is valid and was generated by a key in the trusted
keyring, the program load proceeds.
4. If no signature is provided, the load proceeds as before, allowing
for backward compatibility. LSMs can choose to restrict unsigned
programs and implement a security policy.
5. If signature verification fails for any reason,
the program is not loaded.
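A hedged user-space sketch of supplying these fields (only the three
new fields come from this patch; everything else is ordinary
BPF_PROG_LOAD boilerplate and the final UAPI layout may differ):
    #include <linux/bpf.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    static int load_signed_prog(const struct bpf_insn *insns,
                                __u32 insn_cnt, const void *sig,
                                __u32 sig_len, __s32 keyring_id)
    {
            union bpf_attr attr;
            memset(&attr, 0, sizeof(attr));
            attr.prog_type      = BPF_PROG_TYPE_KPROBE;
            attr.insns          = (__u64)(unsigned long)insns;
            attr.insn_cnt       = insn_cnt;
            attr.license        = (__u64)(unsigned long)"GPL";
            attr.signature      = (__u64)(unsigned long)sig;  /* new */
            attr.signature_size = sig_len;                    /* new */
            attr.keyring_id     = keyring_id;                 /* new */
            return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
    }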
Tested-by: syzbot@syzkaller.appspotmail.com
Signed-off-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/r/20250921160120.9711-2-kpsingh@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
It's no longer required to use nth_page() when iterating pages within a
single SG entry, so let's drop the nth_page() usage.
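An illustrative before/after of the pattern being dropped (the loop
body and names are hypothetical):
    struct page *page = sg_page(sg);
    unsigned int i;
    for (i = 0; i < nr_pages; i++) {
            /* before: struct page *p = nth_page(page, i); */
            struct page *p = page + i;  /* valid within one SG entry */
            flush_dcache_page(p);       /* hypothetical per-page work */
    }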
Link: https://lkml.kernel.org/r/20250901150359.867252-34-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
Return the result of calling crypto_register_alg() directly and remove
the local return variable.
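The resulting function is essentially (sketch; the algorithm object
name follows the existing crypto/anubis.c naming):
    static int __init anubis_mod_init(void)
    {
            return crypto_register_alg(&anubis_alg);
    }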
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
In commit 42d9f6c77479 ("crypto: acomp - Move scomp stream allocation
code into acomp"), the crypto_acomp_streams struct was made to rely on
having the alloc_ctx and free_ctx operations defined in the same order
as the scomp_alg struct. But in that same commit, the alloc_ctx and
free_ctx members of scomp_alg may be randomized by structure layout
randomization, since they are contained in a pure ops structure
(containing only function pointers). If the pointers within scomp_alg
are randomized, but those in crypto_acomp_streams are not, then
the order may no longer match. This fixes the problem by removing the
union from scomp_alg so that both crypto_acomp_streams and scomp_alg
will share the same definition of alloc_ctx and free_ctx, ensuring
they will always have the same layout.
Signed-off-by: Dan Moulding <dan@danm.net>
Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Fixes: 42d9f6c77479 ("crypto: acomp - Move scomp stream allocation code into acomp")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
"This fixes a NULL pointer dereference in ccp and a couple of bugs in
the af_alg interface"
* tag 'v6.17-p3' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: af_alg - Disallow concurrent writes in af_alg_sendmsg
crypto: af_alg - Set merge to zero early in af_alg_sendmsg
crypto: ccp - Always pass in an error pointer to __sev_platform_shutdown_locked()
|
|
Issuing two writes to the same af_alg socket is bogus as the
data will be interleaved in an unpredictable fashion. Furthermore,
concurrent writes may create inconsistencies in the internal
socket state.
Disallow this by adding a new ctx->write field that indicates
exclusive ownership for writing.
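A sketch of the guard this adds in af_alg_sendmsg() (the ctx->write
field comes from this patch; the error code and exact placement are
assumptions):
    lock_sock(sk);
    if (ctx->write) {
            /* another sendmsg() already owns the write side */
            release_sock(sk);
            return -EBUSY;          /* assumed error code */
    }
    ctx->write = true;
    /* ... append data to the socket's TX SG list ... */
    ctx->write = false;             /* drop exclusive ownership */
    release_sock(sk);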
Fixes: 8ff590903d5 ("crypto: algif_skcipher - User-space interface for skcipher operations")
Reported-by: Muhammad Alifa Ramdhan <ramdhan@starlabs.sg>
Reported-by: Bing-Jhong Billy Jheng <billy@starlabs.sg>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
If an error causes af_alg_sendmsg to abort, ctx->merge may contain
a garbage value from the previous loop. This may then trigger a
crash on the next entry into af_alg_sendmsg when it attempts to do
a merge that can't be done.
Fix this by setting ctx->merge to zero near the start of the loop.
Fixes: 8ff590903d5 ("crypto: algif_skcipher - User-space interface for skcipher operations")
Reported-by: Muhammad Alifa Ramdhan <ramdhan@starlabs.sg>
Reported-by: Bing-Jhong Billy Jheng <billy@starlabs.sg>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Currently, if a user enqueues a work item using schedule_delayed_work(),
the wq used is "system_wq" (a per-cpu wq), while queue_delayed_work()
uses WORK_CPU_UNBOUND (used when a cpu is not specified). The same
applies to schedule_work(), which uses system_wq, and queue_work(),
which again makes use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt-in via WQ_UNBOUND.
This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.
This patch adds a new WQ_PERCPU flag to explicitly request the use of
the per-CPU behavior. Both flags coexist for one release cycle to allow
callers to transition their calls.
Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.
With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.
All existing users have been updated accordingly.
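A hedged example of what the conversion looks like for a caller (the
other flags shown are illustrative):
    /* before: implicitly per-CPU */
    wq = alloc_workqueue("my_wq", WQ_MEM_RECLAIM, 0);
    /* after: per-CPU behaviour requested explicitly */
    wq = alloc_workqueue("my_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);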
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since commit 9d7a0ab1c753 ("crypto: ahash - Handle partial blocks in
API"), the recently-added export_core() and import_core() methods in
struct shash_alg have effectively become mandatory (even though it is
not tested or enforced), since legacy drivers that need a fallback
depend on them. Make crypto/md5.c compatible with these legacy drivers
by adding export_core() and import_core() methods to it.
Fixes: ba8ee22a7f92 ("crypto: md5 - Wrap library and add HMAC support")
Link: https://lore.kernel.org/r/20250906215417.89584-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
Curve25519 has both a library API and a crypto_kpp API. However, the
crypto_kpp API for Curve25519 had no users outside crypto/testmgr.c.
I.e., no non-test code ever passed "curve25519" to crypto_alloc_kpp().
Remove this unused code. We'll instead focus on the Curve25519 library
API (<crypto/curve25519.h>), which is a simpler and easier-to-use API
and is the API that is actually being used.
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k
Link: https://lore.kernel.org/r/20250906213523.84915-7-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
Curve25519 is used only via the library API, not the crypto_kpp API. In
preparation for removing the unused crypto_kpp API for Curve25519,
remove the tests for the "curve25519" kpp from crypto/testmgr.c.
Note that these tests just duplicated lib/crypto/curve25519-selftest.c,
which uses the same list of test vectors. So they didn't really provide
any additional value.
Link: https://lore.kernel.org/r/20250906213523.84915-6-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
Since commit 9d7a0ab1c753 ("crypto: ahash - Handle partial blocks in
API"), the recently-added export_core() and import_core() methods in
struct shash_alg have effectively become mandatory (even though it is
not tested or enforced), since legacy drivers that need a fallback
depend on them. Make crypto/sha512.c compatible with these legacy
drivers by adding export_core() and import_core() methods to it.
Reported-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reported-by: Ovidiu Panait <ovidiu.panait.oss@gmail.com>
Closes: https://lore.kernel.org/r/aLSnCc9Ws5L9y+8X@gcabiddu-mobl.ger.corp.intel.com
Fixes: 4bc7f7b687a2 ("crypto: sha512 - Use same state format as legacy drivers")
Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Ovidiu Panait <ovidiu.panait.oss@gmail.com>
Link: https://lore.kernel.org/r/20250901165013.48649-4-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
Since commit 9d7a0ab1c753 ("crypto: ahash - Handle partial blocks in
API"), the recently-added export_core() and import_core() methods in
struct shash_alg have effectively become mandatory (even though it is
not tested or enforced), since legacy drivers that need a fallback
depend on them. Make crypto/sha256.c compatible with these legacy
drivers by adding export_core() and import_core() methods to it.
Reported-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reported-by: Ovidiu Panait <ovidiu.panait.oss@gmail.com>
Closes: https://lore.kernel.org/r/aLSnCc9Ws5L9y+8X@gcabiddu-mobl.ger.corp.intel.com
Fixes: 07f090959bba ("crypto: sha256 - Use same state format as legacy drivers")
Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Ovidiu Panait <ovidiu.panait.oss@gmail.com>
Link: https://lore.kernel.org/r/20250901165013.48649-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
Since commit 9d7a0ab1c753 ("crypto: ahash - Handle partial blocks in
API"), the recently-added export_core() and import_core() methods in
struct shash_alg have effectively become mandatory (even though it is
not tested or enforced), since legacy drivers that need a fallback
depend on them. Make crypto/sha1.c compatible with these legacy drivers
by adding export_core() and import_core() methods to it.
Reported-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reported-by: Ovidiu Panait <ovidiu.panait.oss@gmail.com>
Closes: https://lore.kernel.org/r/aLSnCc9Ws5L9y+8X@gcabiddu-mobl.ger.corp.intel.com
Fixes: b10a74abcfc5 ("crypto: sha1 - Use same state format as legacy drivers")
Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Ovidiu Panait <ovidiu.panait.oss@gmail.com>
Link: https://lore.kernel.org/r/20250901165013.48649-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
This is not a leak! The stack memory is hashed and fed into the
entropy pool. We can't recover the original kernel memory from it.
Reported-by: syzbot+e8bcd7ee3db6cb5cb875@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=e8bcd7ee3db6cb5cb875
Signed-off-by: Edward Adam Davis <eadavis@qq.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
For the "chacha20", "xchacha20", and "xchacha12" skcipher algorithms,
instead of registering "*-generic" drivers as well as conditionally
registering "*-$(ARCH)" drivers, instead just register "*-lib" drivers.
These just use the regular library functions, so they just do the right
thing and are fully accelerated when supported by the CPU.
This eliminates the need for the ChaCha library to support
chacha_crypt_generic() and hchacha_block_generic() as part of its
external interface. A later commit will make chacha_crypt_generic() a
static function.
Since this commit removes several "*-generic" driver names which
crypto/testmgr.c expects to exist, update testmgr.c accordingly.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250827151131.27733-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
Consolidate the Poly1305 code into a single module, similar to various
other algorithms (SHA-1, SHA-256, SHA-512, etc.):
- Each arch now provides a header file lib/crypto/$(SRCARCH)/poly1305.h,
replacing lib/crypto/$(SRCARCH)/poly1305*.c. The header defines
poly1305_block_init(), poly1305_blocks(), poly1305_emit(), and
optionally poly1305_mod_init_arch(). It is included by
lib/crypto/poly1305.c, and thus the code gets built into the single
libpoly1305 module, with improved inlining in some cases.
- Whether arch-optimized Poly1305 is buildable is now controlled
centrally by lib/crypto/Kconfig instead of by
lib/crypto/$(SRCARCH)/Kconfig. The conditions for enabling it remain
the same as before, and it remains enabled by default. (The PPC64 one
remains unconditionally disabled due to 'depends on BROKEN'.)
- Any additional arch-specific translation units for the optimized
Poly1305 code, such as assembly files, are now compiled by
lib/crypto/Makefile instead of lib/crypto/$(SRCARCH)/Makefile.
A special consideration is needed because the Adiantum code uses the
poly1305_core_*() functions directly. For now, just carry forward that
approach. This means retaining the CRYPTO_LIB_POLY1305_GENERIC kconfig
symbol, and keeping the poly1305_core_*() functions in separate
translation units. So it's not quite as streamlined as I've done with the
other hash functions, but we still get a single libpoly1305 module.
Note: to see the diff from the arm, arm64, and x86 .c files to the new
.h files, view this commit with 'git show -M10'.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250829152513.92459-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
Reimplement crypto/md5.c on top of the new MD5 library functions. Also
add support for HMAC-MD5, again just wrapping the library functions.
This closely mirrors crypto/sha1.c.
Link: https://lore.kernel.org/r/20250805222855.10362-7-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull non-MM updates from Andrew Morton:
"Significant patch series in this pull request:
- "squashfs: Remove page->mapping references" (Matthew Wilcox) gets
us closer to being able to remove page->mapping
- "relayfs: misc changes" (Jason Xing) does some maintenance and
minor feature addition work in relayfs
- "kdump: crashkernel reservation from CMA" (Jiri Bohac) switches
us from static preallocation of the kdump crashkernel's working
memory over to dynamic allocation. So the difficulty of a-priori
estimation of the second kernel's needs is removed and the first
kernel obtains extra memory
- "generalize panic_print's dump function to be used by other
kernel parts" (Feng Tang) implements some consolidation and
rationalization of the various ways in which a failing kernel
splats information at the operator"
* tag 'mm-nonmm-stable-2025-08-03-12-47' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (80 commits)
tools/getdelays: add backward compatibility for taskstats version
kho: add test for kexec handover
delaytop: enhance error logging and add PSI feature description
samples: Kconfig: fix spelling mistake "instancess" -> "instances"
fat: fix too many log in fat_chain_add()
scripts/spelling.txt: add notifer||notifier to spelling.txt
xen/xenbus: fix typo "notifer"
net: mvneta: fix typo "notifer"
drm/xe: fix typo "notifer"
cxl: mce: fix typo "notifer"
KVM: x86: fix typo "notifer"
MAINTAINERS: add maintainers for delaytop
ucount: use atomic_long_try_cmpxchg() in atomic_long_inc_below()
ucount: fix atomic_long_inc_below() argument type
kexec: enable CMA based contiguous allocation
stackdepot: make max number of pools boot-time configurable
lib/xxhash: remove unused functions
init/Kconfig: restore CONFIG_BROKEN help text
lib/raid6: update recov_rvv.c zero page usage
docs: update docs after introducing delaytop
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto update from Herbert Xu:
"API:
- Allow hash drivers without fallbacks (e.g., hardware key)
Algorithms:
- Add hmac hardware key support (phmac) on s390
- Re-enable sha384 in FIPS mode
- Disable sha1 in FIPS mode
- Convert zstd to acomp
Drivers:
- Lower priority of qat skcipher and aead
- Convert aspeed to partial block API
- Add iMX8QXP support in caam
- Add rate limiting support for GEN6 devices in qat
- Enable telemetry for GEN6 devices in qat
- Implement full backlog mode for hisilicon/sec2"
* tag 'v6.17-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (116 commits)
crypto: keembay - Use min() to simplify ocs_create_linked_list_from_sg()
crypto: hisilicon/hpre - fix dma unmap sequence
crypto: qat - make adf_dev_autoreset() static
crypto: ccp - reduce stack usage in ccp_run_aes_gcm_cmd
crypto: qat - refactor ring-related debug functions
crypto: qat - fix seq_file position update in adf_ring_next()
crypto: qat - fix DMA direction for compression on GEN2 devices
crypto: jitter - replace ARRAY_SIZE definition with header include
crypto: engine - remove {prepare,unprepare}_crypt_hardware callbacks
crypto: engine - remove request batching support
crypto: qat - flush misc workqueue during device shutdown
crypto: qat - enable rate limiting feature for GEN6 devices
crypto: qat - add compression slice count for rate limiting
crypto: qat - add get_svc_slice_cnt() in device data structure
crypto: qat - add adf_rl_get_num_svc_aes() in rate limiting
crypto: qat - relocate service related functions
crypto: qat - consolidate service enums
crypto: qat - add decompression service for rate limiting
crypto: qat - validate service in rate limiting sysfs api
crypto: hisilicon/sec2 - implement full backlog mode for sec
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux
Pull crypto library updates from Eric Biggers:
"This is the main crypto library pull request for 6.17. The main focus
this cycle is on reorganizing the SHA-1 and SHA-2 code, providing
high-quality library APIs for SHA-1 and SHA-2 including HMAC support,
and establishing conventions for lib/crypto/ going forward:
- Migrate the SHA-1 and SHA-512 code (and also SHA-384 which shares
most of the SHA-512 code) into lib/crypto/. This includes both the
generic and architecture-optimized code. Greatly simplify how the
architecture-optimized code is integrated. Add an easy-to-use
library API for each SHA variant, including HMAC support. Finally,
reimplement the crypto_shash support on top of the library API.
- Apply the same reorganization to the SHA-256 code (and also SHA-224
which shares most of the SHA-256 code). This is a somewhat smaller
change, due to my earlier work on SHA-256. But this brings in all
the same additional improvements that I made for SHA-1 and SHA-512.
There are also some smaller changes:
- Move the architecture-optimized ChaCha, Poly1305, and BLAKE2s code
from arch/$(SRCARCH)/lib/crypto/ to lib/crypto/$(SRCARCH)/. For
these algorithms it's just a move, not a full reorganization yet.
- Fix the MIPS chacha-core.S to build with the clang assembler.
- Fix the Poly1305 functions to work in all contexts.
- Fix a performance regression in the x86_64 Poly1305 code.
- Clean up the x86_64 SHA-NI optimized SHA-1 assembly code.
Note that since the new organization of the SHA code is much simpler,
the diffstat of this pull request is negative, despite the addition of
new fully-documented library APIs for multiple SHA and HMAC-SHA
variants.
These APIs will allow further simplifications across the kernel as
users start using them instead of the old-school crypto API. (I've
already written a lot of such conversion patches, removing over 1000
more lines of code. But most of those will target 6.18 or later)"
* tag 'libcrypto-updates-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux: (67 commits)
lib/crypto: arm64/sha512-ce: Drop compatibility macros for older binutils
lib/crypto: x86/sha1-ni: Convert to use rounds macros
lib/crypto: x86/sha1-ni: Minor optimizations and cleanup
crypto: sha1 - Remove sha1_base.h
lib/crypto: x86/sha1: Migrate optimized code into library
lib/crypto: sparc/sha1: Migrate optimized code into library
lib/crypto: s390/sha1: Migrate optimized code into library
lib/crypto: powerpc/sha1: Migrate optimized code into library
lib/crypto: mips/sha1: Migrate optimized code into library
lib/crypto: arm64/sha1: Migrate optimized code into library
lib/crypto: arm/sha1: Migrate optimized code into library
crypto: sha1 - Use same state format as legacy drivers
crypto: sha1 - Wrap library and add HMAC support
lib/crypto: sha1: Add HMAC support
lib/crypto: sha1: Add SHA-1 library functions
lib/crypto: sha1: Rename sha1_init() to sha1_init_raw()
crypto: x86/sha1 - Rename conflicting symbol
lib/crypto: sha2: Add hmac_sha*_init_usingrawkey()
lib/crypto: arm/poly1305: Remove unneeded empty weak function
lib/crypto: x86/poly1305: Fix performance regression on short messages
...
|
|
The ARRAY_SIZE macro is already defined in linux/array_size.h.
This patch replaces the ARRAY_SIZE definition in jitterentropy.c with
an include, to make the code cleaner, and help reduce the number of
duplicate ARRAY_SIZE definitions in the codebase.
Signed-off-by: Ruben Wauters <rubenru09@aol.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The {prepare,unprepare}_crypt_hardware callbacks were added back in 2016
by commit 735d37b5424b ("crypto: engine - Introduce the block request
crypto engine framework"), but they were never implemented by any driver.
Remove them as they are unused.
Since the 'engine->idling' and 'was_busy' flags are no longer needed,
remove them as well.
Signed-off-by: Ovidiu Panait <ovidiu.panait.oss@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Remove request batching support from crypto_engine, as there are no
drivers using this feature and it doesn't really work that well.
Instead of doing batching based on backlog, a more optimal approach
would be for the user to handle the batching (similar to how IPsec
can hook into GSO to get 64K of data each time or how block encryption
can use unit sizes much greater than 4K).
Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Ovidiu Panait <ovidiu.panait.oss@gmail.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Fix a leak reported by kmemleak:
unreferenced object 0xffff8880093bf7a0 (size 32):
comm "swapper/0", pid 1, jiffies 4294877529
hex dump (first 32 bytes):
9d 18 86 16 f6 38 52 fe 86 91 5b b8 40 b4 a8 86 .....8R...[.@...
ff 3e 6b b0 f8 19 b4 9b 89 33 93 d3 93 85 42 95 .>k......3....B.
backtrace (crc 8ba12f3b):
kmemleak_alloc+0x8d/0xa0
__kmalloc_noprof+0x3cd/0x4d0
prep_buf+0x36/0x70
load_buf+0x10d/0x1c0
krb5_test_one_prf+0x1e1/0x3c0
krb5_selftest.cold+0x7c/0x54c
crypto_krb5_init+0xd/0x20
do_one_initcall+0xa5/0x230
do_initcalls+0x213/0x250
kernel_init_freeable+0x220/0x260
kernel_init+0x1d/0x170
ret_from_fork+0x301/0x410
ret_from_fork_asm+0x1a/0x30
Fixes: fc0cf10c04f4 ("crypto/krb5: Implement crypto self-testing")
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
To avoid a crash when control flow integrity is enabled, make the
workspace ("stream") free function use a consistent type, and call it
through a function pointer that has that same type.
Fixes: 42d9f6c77479 ("crypto: acomp - Move scomp stream allocation code into acomp")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
cryptd_queue::cryptd_cpu_queue is a per-CPU variable and relies on
disabled BH for its locking. Without per-CPU locking in
local_bh_disable() on PREEMPT_RT this data structure requires explicit
locking.
Add a local_lock_t to the struct cryptd_cpu_queue and use
local_lock_nested_bh() for locking. This change adds only lockdep
coverage and does not alter the functional behaviour for !PREEMPT_RT.
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Same as sha256 and sha512: Use the state format that the generic partial
block handling code produces, as requested by Herbert, even though this
is applicable only to legacy drivers.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-7-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
Like I did for crypto/sha512.c, rework crypto/sha1_generic.c (renamed to
crypto/sha1.c) to simply wrap the normal library functions instead of
accessing the low-level block function directly. Also add support for
HMAC-SHA1, again just wrapping the library functions.
Since the replacement crypto_shash algorithms are implemented using the
(potentially arch-optimized) library functions, give them driver names
ending with "-lib" rather than "-generic". Update crypto/testmgr.c and
an odd driver to take this change in driver name into account.
Note: to see the diff from crypto/sha1_generic.c to crypto/sha1.c, view
this commit with 'git show -M10'.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-6-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
Replace the deprecated zero-length array with a modern flexible array
member in the struct zstd_ctx.
No functional changes intended.
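Illustration of the pattern only (member names here are hypothetical,
not the real struct zstd_ctx layout):
    struct zstd_ctx {
            zstd_parameters params;
            u8 wksp[];      /* was: u8 wksp[0]; deprecated zero-length array */
    };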
Link: https://github.com/KSPP/linux/issues/78
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: Kees Cook <kees@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Fix the following warnings reported by the static analyzer Smatch:
crypto/zstd.c:273 zstd_decompress()
warn: duplicate check 'scur' (previous on line 235)
Fixes: f5ad93ffb541 ("crypto: zstd - convert to acomp")
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/linux-crypto/92929e50-5650-40be-8c0a-de81e77f0acf@sabinyo.mountain/
Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the system-wide zero page instead of a custom zero page.
[herbert@gondor.apana.org.au: update lib/raid6/recov_rvv.c, per Klara]
Link: https://lkml.kernel.org/r/aFkUnXWtxcgOTVkw@gondor.apana.org.au
Link: https://lkml.kernel.org/r/Z9flJNkWQICx0PXk@gondor.apana.org.au
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Song Liu <song@kernel.org>
Cc: Yu Kuai <yukuai3@huawei.com>
Cc: Klara Modin <klarasmodin@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
crypto/hash_info.c just contains a couple of arrays that map HASH_ALGO_*
algorithm IDs to properties of those algorithms. It is compiled only
when CRYPTO_HASH_INFO=y, but currently CRYPTO_HASH_INFO depends on
CRYPTO. Since this can be useful without the old-school crypto API,
move it into lib/crypto/ so that it no longer depends on CRYPTO.
This eliminates the need for FS_VERITY to select CRYPTO after it's been
converted to use lib/crypto/.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630172224.46909-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
The intermediary value was included in the wrong
hash state. While there, adapt to user-space by
setting the timestamp to 0 if stuck and inserting
the values nevertheless.
Acked-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Markus Theil <theil.markus@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Make the export and import functions for the sha224, sha256,
hmac(sha224), and hmac(sha256) shash algorithms use the same format as
the padlock-sha and nx-sha256 drivers, as required by Herbert.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160645.3198-11-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
Like I did for crypto/sha512.c, rework crypto/sha256.c to simply wrap
the normal library functions instead of accessing the low-level arch-
optimized and generic block functions directly. Also add support for
HMAC-SHA224 and HMAC-SHA256, again just wrapping the library functions.
Since the replacement crypto_shash algorithms are implemented using the
(potentially arch-optimized) library functions, give them driver names
ending with "-lib" rather than "-generic". Update crypto/testmgr.c and
a couple odd drivers to take this change in driver name into account.
Besides the above cases which are accounted for, there are no known
cases where the driver names were being depended on. There is
potential for confusion for people manually checking /proc/crypto (e.g.
https://lore.kernel.org/r/9e33c893-2466-4d4e-afb1-966334e451a2@linux.ibm.com/),
but really people just need to get used to the driver name not being
meaningful for the software algorithms. Historically, the optimized
code was disabled by default, so there was some purpose to checking
whether it was enabled or not. However, this is now fixed for all SHA-2
algorithms, and the library code just always does the right thing. E.g.
if the CPU supports SHA-256 instructions, they are used.
This change does also mean that the generic partial block handling code
in crypto/shash.c, which got added in 6.16, no longer gets used. But
that's fine; the library has to implement the partial block handling
anyway, and it's better to do it in the library since the block size and
other properties of the algorithm are all fixed at compile time there,
resulting in more streamlined code.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160645.3198-10-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
Currently the SHA-224 and SHA-256 library functions can be mixed
arbitrarily, even in ways that are incorrect, for example using
sha224_init() and sha256_final(). This is because they operate on the
same structure, sha256_state.
Introduce stronger typing, as I did for SHA-384 and SHA-512.
Also as I did for SHA-384 and SHA-512, use the names *_ctx instead of
*_state. The *_ctx names have the following small benefits:
- They're shorter.
- They avoid an ambiguity with the compression function state.
- They're consistent with the well-known OpenSSL API.
- Users usually name the variable 'sctx' anyway, which suggests that
*_ctx would be the more natural name for the actual struct.
Therefore: update the SHA-224 and SHA-256 APIs, implementation, and
calling code accordingly.
In the new structs, also strongly-type the compression function state.
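A short usage sketch of the resulting API (assuming the sha256_*
helpers keep their existing names and now take the new context type):
    static void digest_buf(const u8 *data, size_t len,
                           u8 out[SHA256_DIGEST_SIZE])
    {
            struct sha256_ctx ctx;
            sha256_init(&ctx);
            sha256_update(&ctx, data, len);
            sha256_final(&ctx, out);
            /*
             * sha224_final(&ctx, out) is now a compile-time error: it
             * expects a struct sha224_ctx, not a struct sha256_ctx.
             */
    }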
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160645.3198-7-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
For the "crc32" and "crc32c" shash algorithms, instead of registering
"*-generic" drivers as well as conditionally registering "*-$(ARCH)"
drivers, instead just register "*-lib" drivers. These just use the
regular library functions crc32_le() and crc32c(), so they just do the
right thing and are fully accelerated when supported by the CPU.
This eliminates the need for the CRC library to export crc32_le_base()
and crc32c_base(). Separate commits make those functions static.
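For reference, the library calls that the new "-lib" drivers wrap
boil down to (usage sketch; seed, buf and len are caller-provided):
    u32 a = crc32_le(seed, buf, len);   /* what "crc32-lib" computes */
    u32 b = crc32c(seed, buf, len);     /* what "crc32c-lib" computes */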
Since this commit removes the "crc32-generic" and "crc32c-generic"
driver names which crypto/testmgr.c expects to exist, update testmgr.c
accordingly. This does mean that testmgr.c will no longer fuzz-test the
"generic" implementation against the "arch" implementation for crc32 and
crc32c, but this was redundant with crc_kunit anyway.
Besides the above, and btrfs_init_csum_hash() which the previous commit
fixed, no code appears to have been relying on the "crc32-generic" or
"crc32c-generic" driver names specifically.
btrfs does export the checksum name and checksum driver name in
/sys/fs/btrfs/$uuid/checksum. This commit makes the driver name portion
of that file contain "crc32c-lib" instead of "crc32c-generic" or
"crc32c-$(ARCH)". This should be fine, since in practice the purpose of
the driver name portion of this file seems to have been just to allow
users to manually check whether they needed to enable the optimized
CRC32C code. This was needed only because of the bug in old kernels
where the optimized CRC32C code defaulted to off and even needed to be
explicitly added to the ramdisk to be used. Now that it just works in
Linux 6.14 and later, there's no need for users to take any action and
the driver name portion of this is basically obsolete. (Also, note that
the crc32c driver name already changed in 6.14.)
Acked-by: David Sterba <dsterba@suse.com>
Link: https://lore.kernel.org/r/20250613183753.31864-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
This is no longer needed now that the code that used to directly access
the descriptor context of "crc32c" (libcrc32c and ext4) now just calls
crc32c(). Keep just the generic hash test.
Link: https://lore.kernel.org/r/20250531205937.63008-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
Make the export and import functions for the sha384, sha512,
hmac(sha384), and hmac(sha512) shash algorithms use the same format as
the padlock-sha and nx-sha512 drivers, as required by Herbert.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160320.2888-7-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
Delete crypto/sha512_generic.c, which provided "generic" SHA-384 and
SHA-512 crypto_shash algorithms. Replace it with crypto/sha512.c which
provides SHA-384, SHA-512, HMAC-SHA384, and HMAC-SHA512 crypto_shash
algorithms using the corresponding library functions.
This is a prerequisite for migrating all the arch-optimized SHA-512 code
(which is almost 3000 lines) to lib/crypto/ rather than duplicating it.
Since the replacement crypto_shash algorithms are implemented using the
(potentially arch-optimized) library functions, give them
cra_driver_names ending with "-lib" rather than "-generic". Update
crypto/testmgr.c and one odd driver to take this change in driver name
into account. Besides these cases which are accounted for, there are no
known cases where the cra_driver_name was being depended on.
This change does mean that the abstract partial block handling code in
crypto/shash.c, which got added in 6.16, no longer gets used. But
that's fine; the library has to implement the partial block handling
anyway, and it's better to do it in the library since the block size and
other properties of the algorithm are all fixed at compile time there,
resulting in more streamlined code.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160320.2888-6-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
|
|
Add phmac selftest invocation to the crypto testmanager.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Acked-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Make the hash walk functions
crypto_hash_walk_done()
crypto_hash_walk_first()
crypto_hash_walk_last()
public again.
These functions had been removed from the header file
include/crypto/internal/hash.h with commit 7fa481734016
("crypto: ahash - make hash walk functions private to ahash.c")
as there was no crypto algorithm code using them.
With the upcoming crypto implementation for s390 phmac
these functions will be used again and thus need to be
public within the kernel.
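A minimal sketch of the walk loop such an implementation would use (the
per-block update helper is hypothetical):
struct crypto_hash_walk walk;
int nbytes, err = 0;

for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
     nbytes = crypto_hash_walk_done(&walk, err))
        err = phmac_update_blocks(ctx, walk.data, nbytes); /* hypothetical */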
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Acked-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Set .fips_allowed in the following drbg alg_test_desc structs.
drbg_nopr_hmac_sha384
drbg_nopr_sha384
drbg_pr_hmac_sha384
drbg_pr_sha384
The sha384 and hmac_sha384 DRBGs with and without prediction resistance
were disallowed in an early version of the FIPS 140-3 Implementation
Guidance document. Hence, the fips_allowed flag in struct alg_test_desc
pertaining to the affected DRBGs was unset. The IG has been withdrawn
and they are allowed again.
Furthermore, when the DRBGs are configured, /proc/crypto shows that
drbg_*pr_sha384 and drbg_*pr_hmac_sha384 are FIPS-approved ("fips: yes"),
but because their self-tests are not run (a consequence of unsetting
the fips_allowed flag), the DRBGs won't load successfully, despite the
seemingly contradictory "fips: yes" in /proc/crypto.
This series contains a single patch that sets the fips_allowed flag in
the sha384-impacted DRBGs, which restores the ability to load them in
FIPS mode.
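The change amounts to setting one flag per affected entry in
crypto/testmgr.c; roughly (other fields stay as in the existing entries):
{
        .alg = "drbg_pr_hmac_sha384",
        /* .test and .suite unchanged */
        .fips_allowed = 1,
},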
Link: https://lore.kernel.org/linux-crypto/979f4f6f-bb74-4b93-8cbf-6ed653604f0e@jvdsn.com/
Link: https://csrc.nist.gov/CSRC/media/Projects/cryptographic-module-validation-program/documents/fips%20140-3/FIPS%20140-3%20IG.pdf
To: Herbert Xu <herbert@gondor.apana.org.au>
To: David S. Miller <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Jeff Barnes <jeffbarnes@linux.microsoft.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Convert the implementation to a native acomp interface using zstd
streaming APIs, eliminating the need for buffer linearization.
This includes:
- Removal of the scomp interface in favor of acomp
- Refactoring of stream allocation, initialization, and handling for
both compression and decompression using Zstandard streaming APIs
- Replacement of crypto_register_scomp() with crypto_register_acomp()
for module registration
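A rough sketch of the per-chunk streaming pattern, using the zstd wrapper
API from <linux/zstd.h> (workspace setup, error handling and variable names
are illustrative):
zstd_in_buffer  in  = { .src = src_ptr, .size = src_len, .pos = 0 };
zstd_out_buffer out = { .dst = dst_ptr, .size = dst_len, .pos = 0 };

ret = zstd_compress_stream(cstream, &out, &in);  /* consume one chunk */
...
ret = zstd_end_stream(cstream, &out);            /* write the frame epilogue */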
Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Ensure that drivers that have not been converted to the ahash API
do not use the ahash_request_set_virt fallback path as they cannot
use the software fallback.
Reported-by: Eric Biggers <ebiggers@kernel.org>
Fixes: 9d7a0ab1c753 ("crypto: ahash - Handle partial blocks in API")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Milan Broz <gmazyland@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Invoke the final function directly in the default finup implementation
since crypto_ahash_final is now just a wrapper around finup.
Reported-by: Eric Biggers <ebiggers@kernel.org>
Fixes: 9d7a0ab1c753 ("crypto: ahash - Handle partial blocks in API")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The function opencodes cpumask_nth(). The dedicated helper is faster
than an open for-loop.
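For illustration, the difference boils down to (a sketch of the old
open-coded pattern vs. the helper):
/* old: walk the mask to find the n-th set CPU */
for_each_cpu(cpu, mask)
        if (n-- == 0)
                break;

/* new: dedicated helper */
cpu = cpumask_nth(n, mask);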
Signed-off-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The sunset period of SHA-1 is approaching [1] and FIPS 140 certificates
have a validity of 5 years. Any distros starting FIPS certification for
their kernels now would therefore most likely end up on the NIST
Cryptographic Module Validation Program "historical" list before their
certification expires.
While SHA-1 is technically still allowed until Dec. 31, 2030, it is
heavily discouraged by NIST and it makes sense to set .fips_allowed to
0 now for any crypto algorithms that reference it in order to avoid any
costly surprises down the line.
[1]: https://www.nist.gov/news-events/news/2022/12/nist-retires-sha-1-cryptographic-algorithm
Acked-by: Stephan Mueller <smueller@chronox.de>
Cc: Marcus Meissner <meissner@suse.de>
Cc: Jarod Wilson <jarod@redhat.com>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: John Haxby <john.haxby@oracle.com>
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Commit 698de822780f ("crypto: testmgr - make it easier to enable the
full set of tests") removed support for building kernels that run only
the "fast" set of crypto self-tests by default. This assumed that
nearly everyone actually wanted the full set of tests, *if* they had
already chosen to enable the tests at all.
Unfortunately, it turns out that both Debian and Fedora intentionally
have the crypto self-tests enabled in their production kernels. And for
production kernels we do need to keep the testing time down, which
implies just running the "fast" tests, not the full set of tests.
For Fedora, a reason for enabling the tests in production is that they
are being (mis)used to meet the FIPS 140-3 pre-operational testing
requirement.
However, the other reason for enabling the tests in production, which
applies to both distros, is that they provide some value in protecting
users from buggy drivers. Unfortunately, the crypto/ subsystem has many
buggy and untested drivers for off-CPU hardware accelerators on rare
platforms. These broken drivers get shipped to users, and there have
been multiple examples of the tests preventing these buggy drivers from
being used. So effectively, the tests are being relied on in production
kernels. I think this is kind of crazy (untested drivers should just
not be enabled at all), but that seems to be how things work currently.
Thus, reintroduce a kconfig option that controls the level of testing.
Call it CRYPTO_SELFTESTS_FULL instead of the original name
CRYPTO_MANAGER_EXTRA_TESTS, which was slightly misleading.
Moreover, given the "production kernel" use case, make CRYPTO_SELFTESTS
depend on EXPERT instead of DEBUG_KERNEL.
I also haven't reinstated all the #ifdefs in crypto/testmgr.c. Instead,
just rely on the compiler to optimize out unused code.
Fixes: 40b9969796bf ("crypto: testmgr - replace CRYPTO_MANAGER_DISABLE_TESTS with CRYPTO_SELFTESTS")
Fixes: 698de822780f ("crypto: testmgr - make it easier to enable the full set of tests")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Some drivers cannot have a fallback, e.g., because the key is held
in hardware. Allow these to be used with ahash by adding the bit
CRYPTO_ALG_NO_FALLBACK.
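A hypothetical driver-side sketch of opting out of the fallback (the
algorithm and field values shown are illustrative):
static struct ahash_alg phmac_sha256_alg = {
        .halg.base = {
                .cra_name  = "phmac(sha256)",
                .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NO_FALLBACK,
                /* ... */
        },
        /* ... */
};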
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Harald Freudenberger <freude@linux.ibm.com>
|
|
The HKDF self-tests depend on the HMAC algorithms being registered.
HMAC is now registered at module_init, which put it at the same level as
HKDF. Move HKDF to late_initcall so that it runs afterwards.
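The fix is essentially an initcall-level change (the init function name
below is illustrative):
/* before: same initcall level as the hmac template registration */
module_init(hkdf_module_init);

/* after: guaranteed to run once hmac is registered */
late_initcall(hkdf_module_init);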
Fixes: ef93f1562803 ("Revert "crypto: run initcalls for generic implementations earlier"")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi
Pull EFI updates from Ard Biesheuvel:
"Not a lot going on in the EFI tree this cycle. The only thing that
stands out is the new support for SBAT metadata, which was a bit
contentious when it was first proposed, because in the initial
incarnation, it would have required us to maintain a revocation index,
and bump it each time a vulnerability affecting UEFI secure boot got
fixed. This was shot down for obvious reasons.
This time, only the changes needed to emit the SBAT section into the
PE/COFF image are being carried upstream, and it is up to the distros
to decide what to put in there when creating and signing the build.
This only has the EFI zboot bits (which the distros will be using for
arm64); the x86 bzImage changes should be arriving next cycle,
presumably via the -tip tree.
Summary:
- Add support for emitting a .sbat section into the EFI zboot image,
so that downstreams can easily include revocation metadata in the
signed EFI images
- Align PE symbolic constant names with other projects
- Bug fix for the efi_test module
- Log the physical address and size of the EFI memory map when
failing to map it
- A kerneldoc fix for the EFI stub code"
* tag 'efi-next-for-v6.16' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi:
include: pe.h: Fix PE definitions
efi/efi_test: Fix missing pending status update in getwakeuptime
efi: zboot specific mechanism for embedding SBAT section
efi/libstub: Describe missing 'out' parameter in efi_load_initrd
efi: Improve logging around memmap init
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Paolo Abeni:
"Core:
- Implement the Device Memory TCP transmit path, allowing zero-copy
data transmission on top of TCP from e.g. GPU memory to the wire.
- Move all the IPv6 routing tables management outside the RTNL scope,
under its own lock and RCU. The route control path is now 3x times
faster.
- Convert queue related netlink ops to instance lock, reducing again
the scope of the RTNL lock. This improves the control plane
scalability.
- Refactor the software crc32c implementation, removing unneeded
abstraction layers and significantly improving the related
micro-benchmarks.
- Optimize the GRO engine for UDP-tunneled traffic, for a 10%
performance improvement in related stream tests.
- Cover more per-CPU storage with local nested BH locking; this is
prep work to remove the current per-CPU lock in local_bh_disable()
on PREEMPT_RT.
- Introduce and use nlmsg_payload helper, combining buffer bounds
verification with accessing payload carried by netlink messages.
Netfilter:
- Rewrite the procfs conntrack table implementation, improving
considerably the dump performance. A lot of user-space tools still
use this interface.
- Implement support for wildcard netdevice in netdev basechain and
flowtables.
- Integrate conntrack information into nft trace infrastructure.
- Export set count and backend name to userspace, for better
introspection.
BPF:
- BPF qdisc support: BPF-qdisc can be implemented with BPF struct_ops
programs and can be controlled in similar way to traditional qdiscs
using the "tc qdisc" command.
- Refactor the UDP socket iterator, addressing long-standing issues
with duplicate hits or missed sockets.
Protocols:
- Improve TCP receive buffer auto-tuning and increase the default
upper bound for the receive buffer; overall this improves the
single flow maximum throughput on a 200Gb/s link by over 60%.
- Add AFS GSSAPI security class to AF_RXRPC; it provides transport
security for connections to the AFS fileserver and VL server.
- Improve TCP multipath routing, so that the source address always
matches the nexthop device.
- Introduce SO_PASSRIGHTS for AF_UNIX, to allow disabling SCM_RIGHTS,
and thus preventing DoS caused by passing around problematic FDs.
- Retire DCCP socket. DCCP only receives updates for bugs, and major
distros disable it by default. Its removal allows for better
organisation of TCP fields to reduce the number of cache lines hit
in the fast path.
- Extend TCP drop-reason support to cover PAWS checks.
Driver API:
- Reorganize PTP ioctl flag support to require an explicit opt-in for
the drivers, avoiding the problem of drivers not rejecting new
unsupported flags.
- Converted several device drivers to timestamping APIs.
- Introduce per-PHY ethtool dump helpers, improving the support for
dump operations targeting PHYs.
Tests and tooling:
- Add support for classic netlink in user space C codegen, so that
ynl-c can now read, create and modify links, routes, addresses and
qdisc layer configuration.
- Add ynl sub-types for binary attributes, allowing ynl-c to output
known structs instead of raw binary data, clarifying the classic
netlink output.
- Extend MPTCP selftests to improve the code-coverage.
- Add tests for XDP tail adjustment in AF_XDP.
New hardware / drivers:
- OpenVPN virtual driver: offload OpenVPN data channels processing to
the kernel-space, increasing the data transfer throughput WRT the
user-space implementation.
- Renesas glue driver for the gigabit ethernet RZ/V2H(P) SoC.
- Broadcom asp-v3.0 ethernet driver.
- AMD Renoir ethernet device.
- RealTek MT9888 2.5G ethernet PHY driver.
- Aeonsemi 10G C45 PHYs driver.
Drivers:
- Ethernet high-speed NICs:
- nVidia/Mellanox (mlx5):
- refactor the steering table handling to significantly
reduce the amount of memory used
- add support for complex matches in H/W flow steering
- improve flow steering error handling
- convert to netdev instance locking
- Intel (100G, ice, igb, ixgbe, idpf):
- ice: add switchdev support for LLDP traffic over VF
- ixgbe: add firmware manipulation and regions devlink support
- igb: introduce support for frame transmission preemption
- igb: add persistent NAPI configuration
- idpf: introduce RDMA support
- idpf: add initial PTP support
- Meta (fbnic):
- extend hardware stats coverage
- add devlink dev flash support
- Broadcom (bnxt):
- add support for RX-side device memory TCP
- Wangxun (txgbe):
- implement support for udp tunnel offload
- complete PTP and SRIOV support for AML 25G/10G devices
- Ethernet NICs embedded and virtual:
- Google (gve):
- add device memory TCP TX support
- Amazon (ena):
- support persistent per-NAPI config
- Airoha:
- add H/W support for L2 traffic offload
- add per flow stats for flow offloading
- RealTek (rtl8211): add support for WoL magic packet
- Synopsys (stmmac):
- dwmac-socfpga 1000BaseX support
- add Loongson-2K3000 support
- introduce support for hardware-accelerated VLAN stripping
- Broadcom (bcmgenet):
- expose more H/W stats
- Freescale (enetc, dpaa2-eth):
- enetc: add MAC filter, VLAN filter RSS and loopback support
- dpaa2-eth: convert to H/W timestamping APIs
- vxlan: convert FDB table to rhashtable, for better scalability
- veth: apply qdisc backpressure on full ring to reduce TX drops
- Ethernet switches:
- Microchip (ksz88x3): add ETS scheduler support
- Ethernet PHYs:
- RealTek (rtl8211):
- add support for WoL magic packet
- add support for PHY LEDs
- CAN:
- Add RZ/G3E CANFD support to the rcar_canfd driver.
- Preparatory work for CAN-XL support.
- Add self-tests framework with support for CAN physical interfaces.
- WiFi:
- mac80211:
- scan improvements with multi-link operation (MLO)
- Qualcomm (ath12k):
- enable AHB support for IPQ5332
- add monitor interface support to QCN9274
- add multi-link operation support to WCN7850
- add 802.11d scan offload support to WCN7850
- monitor mode for WCN7850, better 6 GHz regulatory
- Qualcomm (ath11k):
- restore hibernation support
- MediaTek (mt76):
- WiFi-7 improvements
- implement support for mt7990
- Intel (iwlwifi):
- enhanced multi-link single-radio (EMLSR) support on 5 GHz links
- rework device configuration
- RealTek (rtw88):
- improve throughput for RTL8814AU
- RealTek (rtw89):
- add multi-link operation support
- STA/P2P concurrency improvements
- support different SAR configs by antenna
- Bluetooth:
- introduce HCI Driver protocol
- btintel_pcie: do not generate coredump for diagnostic events
- btusb: add HCI Drv commands for configuring altsetting
- btusb: add RTL8851BE device 0x0bda:0xb850
- btusb: add new VID/PID 13d3/3584 for MT7922
- btusb: add new VID/PID 13d3/3630 and 13d3/3613 for MT7925
- btnxpuart: implement host-wakeup feature"
* tag 'net-next-6.16' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1611 commits)
selftests/bpf: Fix bpf selftest build warning
selftests: netfilter: Fix skip of wildcard interface test
net: phy: mscc: Stop clearing the UDPv4 checksum for L2 frames
net: openvswitch: Fix the dead loop of MPLS parse
calipso: Don't call calipso functions for AF_INET sk.
selftests/tc-testing: Add a test for HFSC eltree double add with reentrant enqueue behaviour on netem
net_sched: hfsc: Address reentrant enqueue adding class to eltree twice
octeontx2-pf: QOS: Refactor TC_HTB_LEAF_DEL_LAST callback
octeontx2-pf: QOS: Perform cache sync on send queue teardown
net: mana: Add support for Multi Vports on Bare metal
net: devmem: ncdevmem: remove unused variable
net: devmem: ksft: upgrade rx test to send 1K data
net: devmem: ksft: add 5 tuple FS support
net: devmem: ksft: add exit_wait to make rx test pass
net: devmem: ksft: add ipv4 support
net: devmem: preserve sockc_err
page_pool: fix ugly page_pool formatting
net: devmem: move list_add to net_devmem_bind_dmabuf.
selftests: netfilter: nft_queue.sh: include file transfer duration in log message
net: phy: mscc: Fix memory leak when using one step timestamping
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fix from Herbert Xu:
"Fix a buffer overflow regression in shash"
* tag 'v6.16-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: shash - Fix buffer overrun in import function
|
|
Only set the partial block length to zero if the algorithm is
block-only. Otherwise the descriptor context could be empty,
e.g., for digest_null.
Reported-by: syzbot+4851c19615d35f0e4d68@syzkaller.appspotmail.com
Fixes: 7650f826f7b2 ("crypto: shash - Handle partial blocks in API")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Fix memcpy_sglist to handle partially overlapping SG lists
- Use memcpy_sglist to replace null skcipher
- Rename CRYPTO_TESTS to CRYPTO_BENCHMARK
- Flip CRYPTO_MANAGER_DISABLE_TEST into CRYPTO_SELFTESTS
- Hide CRYPTO_MANAGER
- Add delayed freeing of driver crypto_alg structures
Compression:
- Allocate large buffers on first use instead of initialisation in scomp
- Drop destination linearisation buffer in scomp
- Move scomp stream allocation into acomp
- Add acomp scatter-gather walker
- Remove request chaining
- Add optional async request allocation
Hashing:
- Remove request chaining
- Add optional async request allocation
- Move partial block handling into API
- Add ahash support to hmac
- Fix shash documentation to disallow usage in hard IRQs
Algorithms:
- Remove unnecessary SIMD fallback code on x86 and arm/arm64
- Drop avx10_256 xts(aes)/ctr(aes) on x86
- Improve avx-512 optimisations for xts(aes)
- Move chacha arch implementations into lib/crypto
- Move poly1305 into lib/crypto and drop unused Crypto API algorithm
- Disable powerpc/poly1305 as it has no SIMD fallback
- Move sha256 arch implementations into lib/crypto
- Convert deflate to acomp
- Set block size correctly in cbcmac
Drivers:
- Do not use sg_dma_len before mapping in sun8i-ss
- Fix warm-reboot failure by making shutdown do more work in qat
- Add locking in zynqmp-sha
- Remove cavium/zip
- Add support for PCI device 0x17D8 to ccp
- Add qat_6xxx support in qat
- Add support for RK3576 in rockchip-rng
- Add support for i.MX8QM in caam
Others:
- Fix irq_fpu_usable/kernel_fpu_begin inconsistency during CPU bring-up
- Add new SEV/SNP platform shutdown API in ccp"
* tag 'v6.16-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (382 commits)
x86/fpu: Fix irq_fpu_usable() to return false during CPU onlining
crypto: qat - add missing header inclusion
crypto: api - Redo lookup on EEXIST
Revert "crypto: testmgr - Add hash export format testing"
crypto: marvell/cesa - Do not chain submitted requests
crypto: powerpc/poly1305 - add depends on BROKEN for now
Revert "crypto: powerpc/poly1305 - Add SIMD fallback"
crypto: ccp - Add missing tee info reg for teev2
crypto: ccp - Add missing bootloader info reg for pspv5
crypto: sun8i-ce - move fallback ahash_request to the end of the struct
crypto: octeontx2 - Use dynamic allocated memory region for lmtst
crypto: octeontx2 - Initialize cptlfs device info once
crypto: xts - Only add ecb if it is not already there
crypto: lrw - Only add ecb if it is not already there
crypto: testmgr - Add hash export format testing
crypto: testmgr - Use ahash for generic tfm
crypto: hmac - Add ahash support
crypto: testmgr - Ignore EEXIST on shash allocation
crypto: algapi - Add driver template support to crypto_inst_setname
crypto: shash - Set reqsize in shash_alg
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux
Pull CRC updates from Eric Biggers:
"Cleanups for the kernel's CRC (cyclic redundancy check) code:
- Use __ro_after_init where appropriate
- Remove unnecessary static_key on s390
- Rename some source code files
- Rename the crc32 and crc32c crypto API modules
- Use subsys_initcall instead of arch_initcall
- Restore maintainers for crc_kunit.c
- Fold crc16_byte() into crc16.c
- Add some SPDX license identifiers"
* tag 'crc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux:
lib/crc32: add SPDX license identifier
lib/crc16: unexport crc16_table and crc16_byte()
w1: ds2406: use crc16() instead of crc16_byte() loop
MAINTAINERS: add crc_kunit.c back to CRC LIBRARY
lib/crc: make arch-optimized code use subsys_initcall
crypto: crc32 - remove "generic" from file and module names
x86/crc: drop "glue" from filenames
sparc/crc: drop "glue" from filenames
s390/crc: drop "glue" from filenames
powerpc/crc: rename crc32-vpmsum_core.S to crc-vpmsum-template.S
powerpc/crc: drop "glue" from filenames
arm64/crc: drop "glue" from filenames
arm/crc: drop "glue" from filenames
s390/crc32: Remove no-op module init and exit functions
s390/crc32: Remove have_vxrs static key
lib/crc: make the CPU feature static keys __ro_after_init
|
|
When two crypto algorithm lookups occur at the same time with
different names for the same algorithm, e.g., ctr(aes-generic)
and ctr(aes), they will both be instantiated. However, only one
of them can be registered. The second instantiation will fail
with EEXIST.
Avoid failing the second lookup by making it retry, but only once
because there are tricky names such as gcm_base(ctr(aes),ghash)
that will always fail, despite triggering instantiation and EEXIST.
Reported-by: Ingo Franzki <ifranzki@linux.ibm.com>
Fixes: 2825982d9d66 ("[CRYPTO] api: Added event notification")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This reverts commit 18c438b228558e05ede7dccf947a6547516fc0c7.
The s390 hmac and sha3 algorithms are failing the test. Revert
the change until they have been fixed.
Reported-by: Ingo Franzki <ifranzki@linux.ibm.com>
Link: https://lore.kernel.org/all/623a7fcb-b4cb-48e6-9833-57ad2b32a252@linux.ibm.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Cross-merge networking fixes after downstream PR (net-6.15-rc8).
Conflicts:
80f2ab46c2ee ("irdma: free iwdev->rf after removing MSI-X")
4bcc063939a5 ("ice, irdma: fix an off by one in error handling code")
c24a65b6a27c ("iidc/ice/irdma: Update IDC to support multiple consumers")
https://lore.kernel.org/20250513130630.280ee6c5@canb.auug.org.au
No extra adjacent changes.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
* Rename constants to their standard PE names:
- MZ_MAGIC -> IMAGE_DOS_SIGNATURE
- PE_MAGIC -> IMAGE_NT_SIGNATURE
- PE_OPT_MAGIC_PE32_ROM -> IMAGE_ROM_OPTIONAL_HDR_MAGIC
- PE_OPT_MAGIC_PE32 -> IMAGE_NT_OPTIONAL_HDR32_MAGIC
- PE_OPT_MAGIC_PE32PLUS -> IMAGE_NT_OPTIONAL_HDR64_MAGIC
- IMAGE_DLL_CHARACTERISTICS_NX_COMPAT -> IMAGE_DLLCHARACTERISTICS_NX_COMPAT
* Import constants and their descriptions from the readpe and file projects,
which contain current, up-to-date information:
- IMAGE_FILE_MACHINE_*
- IMAGE_FILE_*
- IMAGE_SUBSYSTEM_*
- IMAGE_DLLCHARACTERISTICS_*
- IMAGE_DLLCHARACTERISTICS_EX_*
- IMAGE_DEBUG_TYPE_*
* Add missing IMAGE_SCN_* constants and update their incorrect description
* Fix incorrect value of IMAGE_SCN_MEM_PURGEABLE constant
* Add description for win32_version and loader_flags PE fields
Signed-off-by: Pali Rohár <pali@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
|
|
Only add ecb to the cipher name if it isn't already ecb.
Also use memcmp instead of strncmp since these strings are all
stored in an array of length CRYPTO_MAX_ALG_NAME.
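A sketch of the name-construction check (variable names are illustrative):
const char *name = cipher_name;

/* Wrap in ecb() only if the underlying name is not already ecb(...). */
if (memcmp(cipher_name, "ecb(", 4)) {
        if (snprintf(ecb_name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
                     cipher_name) >= CRYPTO_MAX_ALG_NAME)
                return -ENAMETOOLONG;
        name = ecb_name;
}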
Fixes: f1c131b45410 ("crypto: xts - Convert to skcipher")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Only add ecb to the cipher name if it isn't already ecb.
Also use memcmp instead of strncmp since these strings are all
stored in an array of length CRYPTO_MAX_ALG_NAME.
Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202505151503.d8a6cf10-lkp@intel.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Ensure that the hash state can be exported to and imported from
the generic algorithm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As shash is being phased out, use ahash for the generic tfm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add ahash support to hmac so that drivers that can't do hmac in
hardware do not have to implement duplicate copies of hmac.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Soon hmac will support ahash. For compatibility hmac still supports
shash so it is possible for two hmac algorithms to be registered at
the same time. The shash algorithm will have the driver name
"hmac-shash(XXX-driver)". Due to a quirk in the API, there is no way
to locate the shash algorithm using the name "hmac(XXX-driver)". It
has to be addressed as either "hmac(XXX)" or "hmac-shash(XXX-driver)".
Looking it up with "hmac(XXX-driver)" will simply trigger the creation
of another instance, and on the second instantiation this will fail
with EEXIST.
Catch the error EEXIST along with ENOENT since it is expected.
If a real shash algorithm came this way, it would be addressed using
the proper name "hmac-shash(XXX-driver)".
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add support to crypto_inst_setname for having a driver template
name that differs from the algorithm template name.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Make reqsize static for shash algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add export_import and import_core so that hmac can be used as a
fallback by block-only drivers.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The shash desc needs to be zeroed after use in setkey as it is
not finalised (finalisation automatically zeroes it).
Also remove the final function as it's been superseded by finup.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Provide an option to handle partial blocks in the ahash API.
Almost every hash algorithm has a block size and is only able
to hash partial blocks on finalisation.
As a first step disable virtual address support for algorithms
with state sizes larger than HASH_MAX_STATESIZE. This is OK as
virtual addresses are currently only used on synchronous fallbacks.
This means ahash_do_req_chain only needs to handle synchronous
fallbacks, removing the complexities of saving the request state.
Also move the saved request state into the ahash_request object
as nesting is no longer possible.
Add a scatterlist to ahash_request to store the partial block.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add export_core and import_core hooks. These are intended to be
used by algorithms which are wrappers around block-only algorithms,
but are not themselves block-only, e.g., hmac.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
If accept(2) is called on socket type algif_hash with
MSG_MORE flag set and crypto_ahash_import fails,
sk2 is freed. However, it is also freed in af_alg_release,
leading to slab-use-after-free error.
Fixes: fe869cdb89c9 ("crypto: algif_hash - User-space interface for hash operations")
Cc: <stable@vger.kernel.org>
Signed-off-by: Ivan Pravdin <ipravdin.official@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
crypto/testmgr.c is compiled only when CRYPTO_MANAGER is enabled. To
make CRYPTO_SELFTESTS work as expected when CRYPTO_MANAGER doesn't get
enabled for another reason, automatically set CRYPTO_MANAGER to the
value of CRYPTO_ALGAPI when CRYPTO_SELFTESTS is enabled.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
There is no reason for people configuring the kernel to be asked about
CRYPTO_MANAGER, so make it a hidden symbol.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rename the noextratests module parameter to noslowtests, and replace
other remaining mentions of "extra" in the code with "slow". This
addresses confusion regarding the word "extra" like that seen at
https://lore.kernel.org/r/6cecf2de-9aa0-f6ea-0c2d-8e974a1a820b@huawei.com/.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Currently the full set of crypto self-tests requires
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y. This is problematic in two ways.
First, developers regularly overlook this option. Second, the
description of the tests as "extra" sometimes gives the impression that
it is not required that all algorithms pass these tests.
Given that the main use case for the crypto self-tests is for
developers, make enabling CONFIG_CRYPTO_SELFTESTS=y just enable the full
set of crypto self-tests by default.
The slow tests can still be disabled by adding the command-line
parameter cryptomgr.noextratests=1, soon to be renamed to
cryptomgr.noslowtests=1. The only known use case for doing this is for
people trying to use the crypto self-tests to satisfy the FIPS 140-3
pre-operational self-testing requirements when the kernel is being
validated as a FIPS 140-3 cryptographic module.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The negative-sense of CRYPTO_MANAGER_DISABLE_TESTS is a longstanding
mistake that regularly causes confusion. Especially bad is that you can
have CRYPTO=n && CRYPTO_MANAGER_DISABLE_TESTS=n, which is ambiguous.
Replace CRYPTO_MANAGER_DISABLE_TESTS with CRYPTO_SELFTESTS which has the
expected behavior.
The tests continue to be disabled by default.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The cryptomgr.panic_on_fail=1 kernel command-line parameter is not very
useful now that the tests have been fixed to WARN on failure, since
developers can just use panic_on_warn=1 instead. There's no need for a
special option just for the crypto self-tests. Remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
tcrypt is actually a benchmarking module and not the actual tests. This
regularly causes confusion. Update the kconfig option name and help
text accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Make null_skcipher_crypt() use memcpy_sglist() instead of the
skcipher_walk API, as this is simpler.
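A minimal sketch of what the simplified crypt function boils down to
(assuming the memcpy_sglist() helper introduced elsewhere in this series):
static int null_skcipher_crypt(struct skcipher_request *req)
{
        memcpy_sglist(req->dst, req->src, req->cryptlen);
        return 0;
}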
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
There is no reason to have separate CRYPTO_NULL2 and CRYPTO_NULL
options. Just merge them into CRYPTO_NULL.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
crypto_get_default_null_skcipher() and
crypto_put_default_null_skcipher() are no longer used, so remove them.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The krb5enc code does not use any of the so-called "null algorithms", so
it does not need to select CRYPTO_NULL. Presumably this unused
dependency got copied from one of the other kconfig options.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
For copying data between two scatterlists, just use memcpy_sglist()
instead of the so-called "null skcipher". This is much simpler.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
For copying data between two scatterlists, just use memcpy_sglist()
instead of the so-called "null skcipher". This is much simpler.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
For copying data between two scatterlists, just use memcpy_sglist()
instead of the so-called "null skcipher". This is much simpler.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
For copying data between two scatterlists, just use memcpy_sglist()
instead of the so-called "null skcipher". This is much simpler.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add explicit array bounds to the function prototypes for the parameters
that didn't already get handled by the conversion to use chacha_state:
- chacha_block_*():
Change 'u8 *out' or 'u8 *stream' to u8 out[CHACHA_BLOCK_SIZE].
- hchacha_block_*():
Change 'u32 *out' or 'u32 *stream' to u32 out[HCHACHA_OUT_WORDS].
- chacha_init():
Change 'const u32 *key' to 'const u32 key[CHACHA_KEY_WORDS]'.
Change 'const u8 *iv' to 'const u8 iv[CHACHA_IV_SIZE]'.
No functional changes. This just makes it clear when fixed-size arrays
are expected.
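For reference, the resulting prototype shapes (a sketch; the per-arch
suffixes and qualifiers vary):
void chacha_block_generic(struct chacha_state *state,
                          u8 out[CHACHA_BLOCK_SIZE], int nrounds);
void hchacha_block_generic(const struct chacha_state *state,
                           u32 out[HCHACHA_OUT_WORDS], int nrounds);
void chacha_init(struct chacha_state *state,
                 const u32 key[CHACHA_KEY_WORDS],
                 const u8 iv[CHACHA_IV_SIZE]);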
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The ChaCha state matrix is 16 32-bit words. Currently it is represented
in the code as a raw u32 array, or even just a pointer to u32. This
weak typing is error-prone. Instead, introduce struct chacha_state:
struct chacha_state {
u32 x[16];
};
Convert all ChaCha and HChaCha functions to use struct chacha_state.
No functional changes.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add crypto_ahash_export_core and crypto_ahash_import_core. For
now they only differ from the normal export/import functions when
going through shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As sync ahash algorithms (currently there are none) are used without
a fallback, ensure that they obey the MAX_SYNC_HASH_REQSIZE rule
just like shash algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Mark shash algorithms with the REQ_VIRT bit as they can handle
virtual addresses as is.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Now that all shash algorithms have converted over to the generic
export format, limit the shash state size to HASH_MAX_STATESIZE.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the shash partial block API by default. Add a separate set
of lib shash algorithms to preserve testing coverage until lib/sha256
has its own tests.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The shash interface already handles partial blocks, use it for
sha224-generic and sha256-generic instead of going through the
lib/sha256 interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As chaining has been removed, all that remains of REQ_CHAIN is
just virtual address support. Rename it before the reintroduction
of batching creates confusion.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The folios contain references to the request itself, so they must
be set up again in the cloned request.
Fixes: 5f3437e9c89e ("crypto: acomp - Simplify folio handling")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This reverts commit c4741b23059794bd99beef0f700103b0d983b3fd.
Crypto API self-tests no longer run at registration time and now
occur either at late_initcall or upon the first use.
Therefore the premise of the above commit no longer exists. Revert
it and subsequent additions of subsys_initcall and arch_initcall.
Note that lib/crypto calls will stay at subsys_initcall (or rather
downgraded from arch_initcall) because they may need to occur
before Crypto API registration.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As has been done for various other algorithms, rework the design of the
SHA-256 library to support arch-optimized implementations, and make
crypto/sha256.c expose both generic and arch-optimized shash algorithms
that wrap the library functions.
This allows users of the SHA-256 library functions to take advantage of
the arch-optimized code, and this makes it much simpler to integrate
SHA-256 for each architecture.
Note that sha256_base.h is not used in the new design. It will be
removed once all the architecture-specific code has been updated.
Move the generic block function into its own module to avoid a circular
dependency from libsha256.ko => sha256-$ARCH.ko => libsha256.ko.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Add export and import functions to maintain existing export format.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As there are no in-kernel users of the Crypto API poly1305 left,
remove it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As poly1305 no longer has any in-kernel users, remove its tests.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since the poly1305 algorithm is fixed, there is no point in going
through the Crypto API for it. Use the lib/crypto poly1305 interface
instead.
For compatibility, keep the poly1305 parameter in the algorithm name.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Cross-merge networking fixes after downstream PR (net-6.15-rc5).
No conflicts or adjacent changes.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The recent patch to make the rfc3961 simplified code use sg_miter rather
than manually walking the scatterlist to hash the contents of a buffer
described by that scatterlist failed to take the starting offset into
account.
This is indicated by the selftests reporting:
krb5: Running aes128-cts-hmac-sha256-128 mic
krb5: !!! TESTFAIL crypto/krb5/selftest.c:446
krb5: MIC mismatch
Fix this by calling sg_miter_skip() before doing the loop to advance
by the offset.
This only affects packet signing modes and not full encryption in RxGK
because, for full encryption, the message digest is handled inside the
authenc and krb5enc drivers.
Note: Nothing in linus/master uses the krb5lib, though the bug is there.
It is used by AF_RXRPC's RxGK implementation in -next, no need to backport.
Fixes: da6f9bf40ac2 ("crypto: krb5 - Use SG miter instead of doing it by hand")
Reported-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Link: https://patch.msgid.link/3824017.1745835726@warthog.procyon.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Since crc32_generic.c and crc32c_generic.c now expose both the generic
and architecture-optimized implementations via the crypto_shash API,
rather than just the generic implementations as they originally did,
remove the "generic" part of the filenames and module names:
crypto/crc32-generic.c => crypto/crc32.c
crypto/crc32c-generic.c => crypto/crc32c.c
crc32-generic.ko => crc32-cryptoapi.ko
crc32c-generic.ko => crc32c-cryptoapi.ko
The reason for adding the -cryptoapi suffixes to the module names is to
avoid a module name collision with crc32.ko which is the library API.
We could instead rename the library module to libcrc32.ko. However,
while lib/crypto/ uses that convention, the rest of lib/ doesn't. Since
the library API is the primary API for CRC-32, I'd like to keep the
unsuffixed name for it and make the Crypto API modules use a suffix.
Acked-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20250428162458.29732-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
|
|
Move the generic part of skcipher walk into scatterwalk, and use
it to implement memcpy_sglist.
This makes memcpy_sglist do the right thing when two distinct SG
lists contain identical subsets (e.g., the AD part of AEAD).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
The accelerated export format on x86/arm64 is easier to use so
switch the generic polyval algorithm to use that format instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Do not copy the exit function in crypto_clone_tfm as it should
only be set after init_tfm or clone_tfm has succeeded.
Move the setting into crypto_clone_ahash and crypto_clone_shash
instead.
Also clone the fb if necessary.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add a helper to clone crypto requests and eliminate code duplication.
Use kmemdup in the helper.
Also add an fb field to crypto_tfm.
This also happens to fix the existing implementations which were
buggy.
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202504230118.1CxUaUoX-lkp@intel.com/
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202504230004.c7mrY0C6-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Now that the architecture-optimized Poly1305 kconfig symbols are defined
regardless of CRYPTO, there is no need for CRYPTO_LIB_POLY1305 to select
CRYPTO. So, remove that. This makes the indirection through the
CRYPTO_LIB_POLY1305_INTERNAL symbol unnecessary, so get rid of that and
just use CRYPTO_LIB_POLY1305 directly. Finally, make the fallback to
the generic implementation use a default value instead of a select; this
makes it consistent with how the arch-optimized code gets enabled and
also with how CRYPTO_LIB_BLAKE2S_GENERIC gets enabled.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Now that the architecture-optimized ChaCha kconfig symbols are defined
regardless of CRYPTO, there is no need for CRYPTO_LIB_CHACHA to select
CRYPTO. So, remove that. This makes the indirection through the
CRYPTO_LIB_CHACHA_INTERNAL symbol unnecessary, so get rid of that and
just use CRYPTO_LIB_CHACHA directly. Finally, make the fallback to the
generic implementation use a default value instead of a select; this
makes it consistent with how the arch-optimized code gets enabled and
also with how CRYPTO_LIB_BLAKE2S_GENERIC gets enabled.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Remove the private and obsolete CRYPTO_ALG_ENGINE bit which is
conflicting with the new CRYPTO_ALG_DUP_FIRST bit.
Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Fixes: f1440a90465b ("crypto: api - Add support for duplicating algorithms before registration")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Merge crypto tree to pick up the scompress scratch refcount fix. The
merge resolution is slightly non-trivial as the context has shifted.
|
|
Commit ddd0a42671c0 only incremented scomp_scratch_users when it was 0,
causing a panic when using ipcomp:
Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN NOPTI
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
CPU: 1 UID: 0 PID: 619 Comm: ping Tainted: G N 6.15.0-rc3-net-00032-ga79be02bba5c #41 PREEMPT(full)
Tainted: [N]=TEST
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.3-1-1 04/01/2014
RIP: 0010:inflate_fast+0x5a2/0x1b90
[...]
Call Trace:
<IRQ>
zlib_inflate+0x2d60/0x6620
deflate_sdecompress+0x166/0x350
scomp_acomp_comp_decomp+0x45f/0xa10
scomp_acomp_decompress+0x21/0x120
acomp_do_req_chain+0x3e5/0x4e0
ipcomp_input+0x212/0x550
xfrm_input+0x2de2/0x72f0
[...]
Kernel panic - not syncing: Fatal exception in interrupt
Kernel Offset: disabled
---[ end Kernel panic - not syncing: Fatal exception in interrupt ]---
Instead, let's keep the old increment, and decrement back to 0 if the
scratch allocation fails.
Fixes: ddd0a42671c0 ("crypto: scompress - Fix scratch allocation failure handling")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Provide an option to handle partial blocks in the shash API.
Almost every hash algorithm has a block size and is only able
to hash partial blocks on finalisation.
Rather than duplicating the partial block handling many times,
add this functionality to the shash API.
It is optional (e.g., hmac never needs this, as it relies on the
partial block handling of the underlying hash); to enable it, set
the bit CRYPTO_AHASH_ALG_BLOCK_ONLY.
The export format is always that of the underlying hash export,
plus the partial block buffer, followed by a single byte for the
partial block length.
Set the bit CRYPTO_AHASH_ALG_FINAL_NONZERO to withhold an extra
byte in the partial block. This will come in handy when this
is extended to ahash where hardware often can't deal with a
zero-length final.
It will also be used for algorithms requiring an extra block for
finalisation (e.g., cmac).
As an optimisation, set the bit CRYPTO_AHASH_ALG_FINUP_MAX if
the algorithm wishes to get as much data as possible instead of
just the last partial block.
The descriptor will be zeroed after finalisation.
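A rough sketch of how a block-only shash might opt in (flag name as
introduced above; the size arithmetic is only illustrative):
.base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
/* export = underlying state + partial block buffer + 1 length byte */
.statesize      = sizeof(struct sha256_state) + SHA256_BLOCK_SIZE + 1,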
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Merge crypto tree to pick up scompress off-by-one patch. The
merge resolution is non-trivial as the dst handling code has been
moved in front of the src.
|
|
Fix off-by-one bug in the last page calculation for src and dst.
Reported-by: Nhat Pham <nphamcs@gmail.com>
Fixes: 2d3553ecb4e3 ("crypto: scomp - Remove support for some non-trivial SG lists")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The return statements were missing, which caused REQ_CHAIN algorithms
to execute twice for every request.
Reported-by: Eric Biggers <ebiggers@kernel.org>
Fixes: 64929fe8c0a4 ("crypto: acomp - Remove request chaining")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This reverts commit 99585c2192cb1ce212876e82ef01d1c98c7f4699.
Remove the acomp multibuffer tests as they are buggy.
Reported-by: Dmitry Antipov <dmantipov@yandex.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The recent code changes in this function triggered a false-positive
maybe-uninitialized warning in software_key_query. Rearrange the
code by moving the sig/tfm variables into the if clause where they
are actually used.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add an atomic flag to the acomp walk and use that in deflate.
Due to the use of a per-cpu context, it is impossible to sleep
during the walk in deflate.
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202504151654.4c3b6393-lkp@intel.com
Fixes: 08cabc7d3c86 ("crypto: deflate - Convert to acomp")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Following the example of the crc32, crc32c, and chacha code, make the
crypto subsystem register both generic and architecture-optimized
poly1305 shash algorithms, both implemented on top of the appropriate
library functions. This eliminates the need for every architecture to
implement the same shash glue code.
Note that the poly1305 shash requires that the key be prepended to the
data, which differs from the library functions where the key is simply a
parameter to poly1305_init(). Previously this was handled at a fairly
low level, polluting the library code with shash-specific code.
Reorganize things so that the shash code handles this quirk itself.
Also, to register the architecture-optimized shashes only when
architecture-optimized code is actually being used, add a function
poly1305_is_arch_optimized() and make each arch implement it. Change
each architecture's Poly1305 module_init function to arch_initcall so
that the CPU feature detection is guaranteed to run before
poly1305_is_arch_optimized() gets called by crypto/poly1305.c. (In
cases where poly1305_is_arch_optimized() just returns true
unconditionally, using arch_initcall is not strictly needed, but it's
still good to be consistent across architectures.)
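A hypothetical sketch of the two per-arch pieces (symbol names are
illustrative):
bool poly1305_is_arch_optimized(void)
{
        /* report whether the SIMD path was enabled during feature setup */
        return static_key_enabled(&have_simd);   /* illustrative key name */
}

arch_initcall(poly1305_arch_mod_init);            /* was module_init() */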
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Ard's recent series of patches removing 'comp' implementations
left behind a bunch of trivial structs, remove them.
These are:
crypto842_ctx - commit 2d985ff0072f ("crypto: 842 - drop obsolete 'comp'
implementation")
lz4_ctx - commit 33335afe33c9 ("crypto: lz4 - drop obsolete 'comp'
implementation")
lz4hc_ctx - commit dbae96559eef ("crypto: lz4hc - drop obsolete
'comp' implementation")
lzo_ctx - commit a3e43a25bad0 ("crypto: lzo - drop obsolete
'comp' implementation")
lzorle_ctx - commit d32da55c5b0c ("crypto: lzo-rle - drop obsolete
'comp' implementation")
Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The block size of a hash algorithm is meant to be the number of
bytes its block function can handle. For cbcmac that should be
the block size of the underlying block cipher instead of one.
Set the block size of all cbcmac implementations accordingly.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Move the sm3 library code into lib/crypto.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Allow any ahash to be used with a stack request, with optional
dynamic allocation when async is needed. The intended usage is:
HASH_REQUEST_ON_STACK(req, tfm);
...
err = crypto_ahash_digest(req);
/* The request cannot complete synchronously. */
if (err == -EAGAIN) {
/* This will not fail. */
req = HASH_REQUEST_CLONE(req, gfp);
/* Redo operation. */
err = crypto_ahash_digest(req);
}
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As all users of the dynamic descsize have been converted to use
a static one instead, remove support for dynamic descsize.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rather than setting descsize in init_tfm, make it an algorithm
attribute and set it during instance construction.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
If the bit CRYPTO_ALG_DUP_FIRST is set, an algorithm will be
duplicated by kmemdup before registration. This is intended for
hardware-based algorithms that may be unplugged at will.
Do not use this if the algorithm data structure is embedded in a
bigger data structure. Perform the duplication in the driver
instead.
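A hypothetical registration-side sketch:
/* Ask the core to kmemdup() the algorithm before registration. */
alg->cra_flags |= CRYPTO_ALG_DUP_FIRST;
err = crypto_register_alg(alg);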
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The current algorithm unregistration mechanism originated from
software crypto. The code relies on module reference counts to
stop in-use algorithms from being unregistered. Therefore if
the unregistration function is reached, it is assumed that the
module reference count has hit zero and thus the algorithm reference
count should be exactly 1.
This is completely broken for hardware devices, which can be
unplugged at random.
Fix this by allowing algorithms to be destroyed later if a destroy
callback is provided.
Reported-by: Sean Anderson <sean.anderson@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
If the destination buffer has a fixed length, strscpy() automatically
determines its size using sizeof() when the argument is omitted. This
makes the explicit size argument unnecessary - remove it.
No functional changes intended.
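A sketch of the pattern (buffer and source names are illustrative):
char name[CRYPTO_MAX_ALG_NAME];

strscpy(name, alg_name, sizeof(name));  /* before */
strscpy(name, alg_name);                /* after: size inferred from the array */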
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
When user space issues a KEYCTL_PKEY_QUERY system call for a NIST P521
key, the key_size is incorrectly reported as 528 bits instead of 521.
That's because the key size obtained through crypto_sig_keysize() is in
bytes and software_key_query() multiplies by 8 to yield the size in bits.
The underlying assumption is that the key size is always a multiple of 8.
With the recent addition of NIST P521, that's no longer the case.
Fix by returning the key_size in bits from crypto_sig_keysize() and
adjusting the calculations in software_key_query().
The ->key_size() callbacks of sig_alg algorithms now return the size in
bits, whereas the ->digest_size() and ->max_size() callbacks return the
size in bytes. This matches the units in struct keyctl_pkey_query.
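A sketch of the unit change at the query site (field names from struct
kernel_pkey_query; the exact calculations in software_key_query() differ):
unsigned int key_size_bits = crypto_sig_keysize(sig);  /* now in bits */

info->key_size     = key_size_bits;            /* 521 for NIST P521 */
info->max_sig_size = crypto_sig_maxsize(sig);  /* still in bytes */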
Fixes: a7d45ba77d3d ("crypto: ecdsa - Register NIST P521 and extend test suite")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
KEYCTL_PKEY_QUERY system calls for ecdsa keys return the key size as
max_enc_size and max_dec_size, even though such keys cannot be used for
encryption/decryption. They're exclusively for signature generation or
verification.
Only rsa keys with pkcs1 encoding can also be used for encryption or
decryption.
Return 0 instead for ecdsa keys (as well as ecrdsa keys).
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the common reqsize field and remove reqsize from ahash_alg.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Remove the type-specific reqsize field in favour of the common one.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the common reqsize if present.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rather than storing the folio as is and handling it later, convert
it to a scatterlist right away.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add a new helper ACOMP_REQUEST_CLONE that will transform a stack
request into a dynamically allocated one if possible, and otherwise
switch it over to the synchronous fallback transform. The intended
usage is:
ACOMP_REQUEST_ON_STACK(req, tfm);
...
err = crypto_acomp_compress(req);
/* The request cannot complete synchronously. */
if (err == -EAGAIN) {
/* This will not fail. */
req = ACOMP_REQUEST_CLONE(req, gfp);
/* Redo operation. */
err = crypto_acomp_compress(req);
}
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add a helper to create an on-stack fallback request from a given
request. Use this helper in acomp_do_nondma.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use kzalloc() to zero out the one-element array instead of using
kmalloc() followed by a manual NUL-termination.
No functional changes intended.
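A sketch of the pattern (variable names are illustrative):
/* before */
buf = kmalloc(1, GFP_KERNEL);
if (buf)
        buf[0] = '\0';

/* after */
buf = kzalloc(1, GFP_KERNEL);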
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Request chaining requires the user to do too much book keeping.
Remove it from ahash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This reverts commit c664f034172705a75f3f8a0c409b9bf95b633093.
Remove the multibuffer ahash speed tests again.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Request chaining requires the user to do too much book keeping.
Remove it from acomp.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Remove request chaining support from deflate.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This reverts commit 99585c2192cb1ce212876e82ef01d1c98c7f4699.
Remove the acomp multibuffer tests so that the interface can be
redesigned.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Merge crypto tree to pick up scompress and ahash fixes. The
scompress fix becomes mostly unnecessary as the bugs no longer
exist with the new acompress code. However, keep the NULL assignment
in crypto_acomp_free_streams so that if the user decides to call
crypto_acomp_alloc_streams again it will work.
|
|
Disable hash request chaining in case a driver that copies an
ahash_request object by hand accidentally triggers chaining.
Reported-by: Manorit Chawdhry <m-chawdhry@ti.com>
Fixes: f2ffe5a9183d ("crypto: hash - Add request chaining API")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Manorit Chawdhry <m-chawdhry@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
In order to use scomp_free_streams to free the partially allocated
streams in the allocation error path, move the alg->stream assignment
to the beginning. Also check for error pointers in scomp_free_streams
before freeing the ctx.
Finally, set alg->stream to NULL so that subsequent attempts to
allocate the streams are not broken.
Fixes: 3d72ad46a23a ("crypto: acomp - Move stream management into scomp layer")
Reported-by: syzkaller <syzkaller@googlegroups.com>
Co-developed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Co-developed-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
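A self-contained sketch of the freeing pattern described above; the demo_* types and names are illustrative stand-ins rather than the real scomp structures. Slots that were never filled, or that hold error pointers from a failed allocation, are skipped, and the pointer is cleared up front so that a later allocation attempt starts clean.
	#include <linux/cpumask.h>
	#include <linux/err.h>
	#include <linux/percpu.h>
	#include <linux/slab.h>

	/* Illustrative stand-ins for the per-CPU stream bookkeeping. */
	struct demo_stream {
		void *ctx;
	};

	struct demo_alg {
		struct demo_stream __percpu *stream;
		void (*free_ctx)(void *ctx);
	};

	static void demo_free_streams(struct demo_alg *alg)
	{
		struct demo_stream __percpu *stream = alg->stream;
		int i;

		/* Clear the pointer first so a later allocation attempt works. */
		alg->stream = NULL;
		if (!stream)
			return;

		for_each_possible_cpu(i) {
			void *ctx = per_cpu_ptr(stream, i)->ctx;

			/* Partial allocation: skip empty slots and error pointers. */
			if (IS_ERR_OR_NULL(ctx))
				continue;
			alg->free_ctx(ctx);
		}

		free_percpu(stream);
	}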
|
|
Merge crypto tree to pick up scompress and caam fixes. The scompress
fix has a non-trivial resolution as the code in question has moved
over to acompress.
|
|
As the scomp streams are freed when an algorithm is unregistered,
it is possible that the algorithm has never been used at all (e.g.,
an algorithm that does not have a self-test). So test whether the
streams exist before freeing them.
Reported-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Fixes: 3d72ad46a23a ("crypto: acomp - Move stream management into scomp layer")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
<crypto/internal/chacha.h> is now included only by crypto/chacha.c, so
fold it into there.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Following the example of the crc32 and crc32c code, make the crypto
subsystem register both generic and architecture-optimized chacha20,
xchacha20, and xchacha12 skcipher algorithms, all implemented on top of
the appropriate library functions. This eliminates the need for every
architecture to implement the same skcipher glue code.
To register the architecture-optimized skciphers only when
architecture-optimized code is actually being used, add a function
chacha_is_arch_optimized() and make each arch implement it. Change each
architecture's ChaCha module_init function to arch_initcall so that the
CPU feature detection is guaranteed to run before
chacha_is_arch_optimized() gets called by crypto/chacha.c. In the case
of s390, remove the CPU feature based module autoloading, which is no
longer needed since the module just gets pulled in via function linkage.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
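A minimal sketch of the per-arch hook described above, assuming a hypothetical CPU-feature static key named have_simd; each architecture's real implementation keys off its own feature detection, which runs at arch_initcall time.
	#include <linux/export.h>
	#include <linux/jump_label.h>
	#include <linux/types.h>

	/* Hypothetical flag flipped by this arch's CPU feature detection. */
	static DEFINE_STATIC_KEY_FALSE(have_simd);

	bool chacha_is_arch_optimized(void)
	{
		/* Only true once arch_initcall-time feature detection has
		 * enabled the optimized code path. */
		return static_key_enabled(&have_simd);
	}
	EXPORT_SYMBOL(chacha_is_arch_optimized);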
|
|
As deflate has been converted over to acomp, and cavium zip has been
removed, there are no longer any scomp algorithms that can be used
by IPsec.
Since IPsec was the only user of the dst scratch buffer, remove it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This is based on work by Ard Biesheuvel <ardb@kernel.org>.
Convert deflate from scomp to acomp. This removes the need for
the caller to linearise the source and destination.
Link: https://lore.kernel.org/all/20230718125847.3869700-21-ardb@kernel.org/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add acomp_walk which is similar to skcipher_walk but tailored for
acomp.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Move the dynamic stream allocation code into acomp and make it
available as a helper for acomp algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Per-cpu buffers can be wasteful when the number of CPUs is large,
especially if the buffer itself is likely to never be used. Reduce
such wastage by only allocating them on first use of a particular
CPU.
On start-up, allocate a single buffer on the first possible CPU.
For every other CPU a work struct will be scheduled on first use
to allocate the buffer for that CPU. Until the allocation succeeds,
simply use the first CPU's buffer, which is protected by a spin
lock.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
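A rough sketch of the lazy scheme; the demo_* names are illustrative rather than the actual symbols, start-up initialisation (INIT_WORK for each CPU and the eager allocation on the first possible CPU) is omitted, and the locking is simplified.
	#include <linux/cpumask.h>
	#include <linux/kernel.h>
	#include <linux/percpu.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>
	#include <linux/workqueue.h>

	#define DEMO_BUF_SIZE 4096

	struct demo_scratch {
		void *buf;
		struct work_struct work;
	};

	static DEFINE_PER_CPU(struct demo_scratch, demo_scratch);
	static DEFINE_SPINLOCK(demo_fallback_lock);	/* guards the shared buffer */

	static void demo_alloc_work(struct work_struct *work)
	{
		struct demo_scratch *s = container_of(work, struct demo_scratch, work);

		/* Process-context allocation for the CPU that hit first use. */
		s->buf = kzalloc(DEMO_BUF_SIZE, GFP_KERNEL);
	}

	/* Returns the buffer to use; *locked tells the caller whether it must
	 * drop demo_fallback_lock once it is done with the borrowed buffer. */
	static void *demo_scratch_get(bool *locked)
	{
		struct demo_scratch *s = this_cpu_ptr(&demo_scratch);

		*locked = false;
		if (likely(s->buf))
			return s->buf;

		/* First use on this CPU: allocate in the background and borrow
		 * the first possible CPU's buffer under the lock meanwhile. */
		schedule_work(&s->work);
		spin_lock(&demo_fallback_lock);
		*locked = true;
		return per_cpu_ptr(&demo_scratch, cpumask_first(cpu_possible_mask))->buf;
	}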
|
|
Move the cra_type->destroy call out of crypto_alg_put and into
crypto_unregister_alg and crypto_free_instance. This ensures
that it is always done in process context, so that calls such as
flush_work can be made.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Commit 9ae4577bc077 ("crypto: api - Use work queue in
crypto_destroy_instance") introduced a work struct to free an
instance after the last user goes away.
Move the delayed work from the instance into its template so that
when the template is unregistered it can ensure that all its
instances have been freed before returning.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fix from Herbert Xu:
- revert the multibuffer hash testing as it is buggy
* tag 'v6.15-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
Revert "crypto: testmgr - Add multibuffer hash testing"
|
|
This reverts commit 8b54e6a8f4156ed43627f40300b0711dc977fbc1.
The multibuffer tests have a number of bugs. For example, the SG
lists for the filler requests weren't initialised properly, and
data-keyed algorithms such as poly1305 weren't taken into account.
More importantly, the chaining interface itself is under review.
Revert this until the interface is fully settled.
Reported-by: Manorit Chawdhry <m-chawdhry@ti.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202503281658.7a078821-lkp@intel.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Remove legacy compression interface
- Improve scatterwalk API
- Add request chaining to ahash and acomp
- Add virtual address support to ahash and acomp
- Add folio support to acomp
- Remove NULL dst support from acomp
Algorithms:
- Library options are fully hidden (selected by kernel users only)
- Add Kerberos5 algorithms
- Add VAES-based ctr(aes) on x86
- Ensure LZO respects output buffer length on compression
- Remove obsolete SIMD fallback code path from arm/ghash-ce
Drivers:
- Add support for PCI device 0x1134 in ccp
- Add support for rk3588's standalone TRNG in rockchip
- Add Inside Secure SafeXcel EIP-93 crypto engine support in eip93
- Fix bugs in tegra uncovered by multi-threaded self-test
- Fix corner cases in hisilicon/sec2
Others:
- Add SG_MITER_LOCAL to sg miter
- Convert ubifs, hibernate and xfrm_ipcomp from legacy API to acomp"
* tag 'v6.15-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (187 commits)
crypto: testmgr - Add multibuffer acomp testing
crypto: acomp - Fix synchronous acomp chaining fallback
crypto: testmgr - Add multibuffer hash testing
crypto: hash - Fix synchronous ahash chaining fallback
crypto: arm/ghash-ce - Remove SIMD fallback code path
crypto: essiv - Replace memcpy() + NUL-termination with strscpy()
crypto: api - Call crypto_alg_put in crypto_unregister_alg
crypto: scompress - Fix incorrect stream freeing
crypto: lib/chacha - remove unused arch-specific init support
crypto: remove obsolete 'comp' compression API
crypto: compress_null - drop obsolete 'comp' implementation
crypto: cavium/zip - drop obsolete 'comp' implementation
crypto: zstd - drop obsolete 'comp' implementation
crypto: lzo - drop obsolete 'comp' implementation
crypto: lzo-rle - drop obsolete 'comp' implementation
crypto: lz4hc - drop obsolete 'comp' implementation
crypto: lz4 - drop obsolete 'comp' implementation
crypto: deflate - drop obsolete 'comp' implementation
crypto: 842 - drop obsolete 'comp' implementation
crypto: nx - Migrate to scomp API
...
|
|
Pull block updates from Jens Axboe:
- Fixes for integrity handling
- NVMe pull request via Keith:
- Secure concatenation for TCP transport (Hannes)
- Multipath sysfs visibility (Nilay)
- Various cleanups (Qasim, Baruch, Wang, Chen, Mike, Damien, Li)
- Correct use of 64-bit BARs for pci-epf target (Niklas)
- Socket fix for selinux when used in containers (Peijie)
- MD pull request via Yu:
- fix recovery can preempt resync (Li Nan)
- fix md-bitmap IO limit (Su Yue)
- fix raid10 discard with REQ_NOWAIT (Xiao Ni)
- fix raid1 memory leak (Zheng Qixing)
- fix mddev uaf (Yu Kuai)
- fix raid1,raid10 IO flags (Yu Kuai)
- some refactor and cleanup (Yu Kuai)
- Series cleaning up and fixing bugs in the bad block handling code
- Improve support for write failure simulation in null_blk
- Various lock ordering fixes
- Fixes for locking for debugfs attributes
- Various ublk related fixes and improvements
- Cleanups for blk-rq-qos wait handling
- blk-throttle fixes
- Fixes for loop dio and sync handling
- Fixes and cleanups for the auto-PI code
- Block side support for hardware encryption keys in blk-crypto
- Various cleanups and fixes
* tag 'for-6.15/block-20250322' of git://git.kernel.dk/linux: (105 commits)
nvmet: replace max(a, min(b, c)) by clamp(val, lo, hi)
nvme-tcp: fix selinux denied when calling sock_sendmsg
nvmet: pci-epf: Always configure BAR0 as 64-bit
nvmet: Remove duplicate uuid_copy
nvme: zns: Simplify nvme_zone_parse_entry()
nvmet: pci-epf: Remove redundant 'flush_workqueue()' calls
nvmet-fc: Remove unused functions
nvme-pci: remove stale comment
nvme-fc: Utilise min3() to simplify queue count calculation
nvme-multipath: Add visibility for queue-depth io-policy
nvme-multipath: Add visibility for numa io-policy
nvme-multipath: Add visibility for round-robin io-policy
nvmet: add tls_concat and tls_key debugfs entries
nvmet-tcp: support secure channel concatenation
nvmet: Add 'sq' argument to alloc_ctrl_args
nvme-fabrics: reset admin connection for secure concatenation
nvme-tcp: request secure channel concatenation
nvme-keyring: add nvme_tls_psk_refresh()
nvme: add nvme_auth_derive_tls_psk()
nvme: add nvme_auth_generate_digest()
...
|
|
Add rudimentary multibuffer acomp testing. Test coverage extends
only to the compression vectors. However, as the compression
vectors are compressed and then decompressed, this covers both
compression and decompression.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The synchronous acomp fallback code path is broken because the
completion code path assumes that the state object is always set
but this is only done for asynchronous algorithms.
First, remove the assumption on the completion code path by
passing in req0 instead of the state. Also remove the conditional
setting of the state, since it is always in the request object
anyway.
Fixes: b67a02600372 ("crypto: acomp - Add request chaining and virtual addresses")
Reported-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This is based on a patch by Eric Biggers <ebiggers@google.com>.
Add a limited self-test for the multibuffer hash code path. This
tests only a single request of random length within the chain. The
other requests are either all of the same length as the one being
tested, or of random lengths between 0 and PAGE_SIZE * 2 * XBUFSIZE.
A potential extension would be to test all requests rather than
just the single one.
Link: https://lore.kernel.org/all/20241001153718.111665-3-ebiggers@kernel.org/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The synchronous ahash fallback code paths are broken because
ahash_restore_req assumes there is always a state object. Fix this
by removing the state from ahash_restore_req and localising it to
the asynchronous completion callback.
Also add a missing synchronous finish call in ahash_def_digest_finish.
Fixes: f2ffe5a9183d ("crypto: hash - Add request chaining API")
Fixes: 439963cdc3aa ("crypto: ahash - Add virtual address support")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use strscpy() to copy the NUL-terminated string 'p' to the destination
buffer instead of using memcpy() followed by a manual NUL-termination.
No functional changes intended.
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
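A generic kernel-style illustration of the conversion (the buffer length and names are placeholders, not the essiv code):
	#include <linux/string.h>

	#define DEMO_NAME_LEN 64

	static void demo_copy_name(char dst[DEMO_NAME_LEN], const char *p)
	{
		/* Before: memcpy() plus a manual terminator, e.g.
		 *
		 *	len = strlen(p);
		 *	memcpy(dst, p, len);
		 *	dst[len] = '\0';
		 *
		 * After: strscpy() copies at most DEMO_NAME_LEN - 1 bytes and
		 * always NUL-terminates the destination.
		 */
		strscpy(dst, p, DEMO_NAME_LEN);
	}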
|
|
Instead of calling cra_destroy by hand, call it through
crypto_alg_put so that the correct unwinding functions are called
through crypto_destroy_alg.
Fixes: 3d6979bf3bd5 ("crypto: api - Add cra_type->destroy hook")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Fix a stream-freeing crash by passing the correct pointer.
Fixes: 3d72ad46a23a ("crypto: acomp - Move stream management into scomp layer")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|