| gmrtd is a Go library for reading Machine Readable Travel Documents (MRTDs). Prior to version 0.17.2, ReadFile accepts TLVs with declared lengths of up to 4GB, which can cause unconstrained resource consumption in both memory and CPU cycles. ReadFile can consume an extended TLV with a length well beyond anything a real IC could hold: a value approaching 4GB forces an enormous number of 256-byte chunked reads and an attempt to allocate memory that may not be available in constrained environments such as phones. The same problem applies if an API sends data to ReadFile. The very small chunked reads also lock the goroutine into accepting data for a very large number of iterations. Projects using the gmrtd library to read files over NFC can experience extreme slowdowns or memory consumption. A malicious NFC device can behave like a mock transceiver, sending dummy bytes for each chunk to be read, making the receiving thread unresponsive and filling up memory on the host system. Version 0.17.2 patches the issue. |
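A minimal sketch (not gmrtd's actual patch) of the kind of guard involved: reject TLV lengths above a sane ceiling before entering the chunked read loop, so a hostile transceiver cannot force gigabytes of 256-byte reads. The names maxFileSize, chunkSize, readFile and the transceiver interface are hypothetical, and the 1 MiB limit is only illustrative.

```go
package mrtd

import "fmt"

const (
	maxFileSize = 1 << 20 // illustrative ceiling; real EF files are far smaller than 4GB
	chunkSize   = 256
)

// transceiver stands in for whatever reads raw chunks from the IC or NFC device.
type transceiver interface {
	ReadChunk(offset, length int) ([]byte, error)
}

// readFile validates the declared TLV length before allocating or looping on it.
func readFile(t transceiver, declaredLen int) ([]byte, error) {
	if declaredLen < 0 || declaredLen > maxFileSize {
		return nil, fmt.Errorf("TLV length %d exceeds limit %d", declaredLen, maxFileSize)
	}
	buf := make([]byte, 0, declaredLen)
	for off := 0; off < declaredLen; off += chunkSize {
		n := chunkSize
		if rem := declaredLen - off; rem < n {
			n = rem
		}
		chunk, err := t.ReadChunk(off, n)
		if err != nil {
			return nil, err
		}
		buf = append(buf, chunk...)
	}
	return buf, nil
}
```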
| Every uncached /avatar/:hash request spawns a goroutine that refreshes the Gravatar image. If the refresh sits in the 10-slot worker queue longer than three seconds, the handler times out and stops listening for the result, so that goroutine blocks forever trying to send on an unbuffered channel. Sustained traffic with random hashes keeps tripping this timeout, so goroutine count grows linearly, eventually exhausting memory and causing Grafana to crash on some systems. |
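A minimal sketch (not Grafana's code) of the leak pattern described above and one common fix: give the result channel a buffer of one so the refresh goroutine can always complete its send and exit, even after the handler has timed out and stopped listening.

```go
package avatar

import "time"

type avatar struct{ data []byte }

// fetchWithTimeout runs refresh in a goroutine and waits at most timeout for it.
func fetchWithTimeout(refresh func() avatar, timeout time.Duration) (avatar, bool) {
	// Buffered: the send below never blocks, so the goroutine exits even if
	// nobody is left to receive. With an unbuffered channel this goroutine
	// would block forever once the timeout fires, leaking one goroutine per
	// uncached request.
	result := make(chan avatar, 1)
	go func() {
		result <- refresh()
	}()

	select {
	case a := <-result:
		return a, true
	case <-time.After(timeout):
		return avatar{}, false // caller gives up; the worker still finishes and exits
	}
}
```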
| Missing Release of Memory after Effective Lifetime vulnerability in Is-Daouda is-Engine. This issue affects is-Engine: before 3.3.4. |
| Missing Release of Memory after Effective Lifetime vulnerability in ydb-platform ydb (contrib/libs/yajl modules). This vulnerability is associated with the program file yajl_tree.c.
This issue affects ydb: through 24.4.4.2. |
| Suricata is a network IDS, IPS and NSM engine. Prior to versions 8.0.3 and 7.0.14, crafted DCERPC traffic can cause Suricata to expand a buffer without limits, leading to memory exhaustion and the process getting killed. While reported for DCERPC over UDP, it is believed that DCERPC over TCP and SMB are also vulnerable. DCERPC/TCP in the default configuration should not be vulnerable as the default stream depth is limited to 1MiB. Versions 8.0.3 and 7.0.14 contain a patch. Some workarounds are available. For DCERPC/UDP, disable the parser. For DCERPC/TCP, the `stream.reassembly.depth` setting will limit the amount of data that can be buffered. For DCERPC/SMB, the `stream.reassembly.depth` can be used as well, but is set to unlimited by default. Imposing a limit here may lead to loss of visibility in SMB. |
| Suricata is a network IDS, IPS and NSM engine. Prior to versions 8.0.3 and 7.0.14, specially crafted traffic can cause Suricata to consume large amounts of memory while parsing DNP3 traffic. This can lead to the process slowing down and running out of memory, potentially leading to it getting killed by the OOM killer. Versions 8.0.3 and 7.0.14 contain a patch. As a workaround, disable the DNP3 parser in the suricata.yaml (it is disabled by default). |
| A denial-of-service vulnerability exists in the NetX IPv6 component functionality of Eclipse ThreadX NetX Duo. A specially crafted "Packet Too Big" network packet with more than 15 different source addresses can lead to denial of service. An attacker can send a malicious packet to trigger this vulnerability. |
| In the Linux kernel, the following vulnerability has been resolved:
ice: fix Rx page leak on multi-buffer frames
The ice_put_rx_mbuf() function handles calling ice_put_rx_buf() for each
buffer in the current frame. This function was introduced as part of
handling multi-buffer XDP support in the ice driver.
It works by iterating over the buffers from first_desc up to 1 plus the
total number of fragments in the frame, cached from before the XDP program
was executed.
If the hardware posts a descriptor with a size of 0, the logic used in
ice_put_rx_mbuf() breaks. Such descriptors get skipped and don't get added
as fragments in ice_add_xdp_frag. Since the buffer isn't counted as a
fragment, we do not iterate over it in ice_put_rx_mbuf(), and thus we don't
call ice_put_rx_buf().
Because we don't call ice_put_rx_buf(), we don't attempt to re-use the
page or free it. This leaves a stale page in the ring, as we don't
increment next_to_alloc.
The ice_reuse_rx_page() function assumes that next_to_alloc has been
incremented properly, and that it always points to a buffer with a NULL page. Since
this function doesn't check, it will happily recycle a page over the top
of the next_to_alloc buffer, losing track of the old page.
Note that this leak only occurs for multi-buffer frames. The
ice_put_rx_mbuf() function always handles at least one buffer, so a
single-buffer frame will always get handled correctly. It is not clear
precisely why the hardware hands us descriptors with a size of 0 sometimes,
but it happens somewhat regularly with "jumbo frames" at 9K MTU.
To fix ice_put_rx_mbuf(), we need to make sure to call ice_put_rx_buf() on
all buffers between first_desc and next_to_clean. Borrow the logic of a
similar function in i40e used for this same purpose. Use the same logic
also in ice_get_pgcnts().
Instead of iterating over just the number of fragments, use a loop which
iterates until the current index reaches the next_to_clean element just
past the current frame. Unlike i40e, the ice_put_rx_mbuf() function does
call ice_put_rx_buf() on the last buffer of the frame indicating the end of
packet.
For non-linear (multi-buffer) frames, we need to take care when adjusting
the pagecnt_bias. An XDP program might release fragments from the tail of
the frame, in which case that fragment page is already released. Only
update the pagecnt_bias for the first descriptor and fragments still
remaining post-XDP program. Take care to only access the shared info for
fragmented buffers, as this avoids a significant cache miss.
The xdp_xmit value only needs to be updated if an XDP program is run, and
only once per packet. Drop the xdp_xmit pointer argument from
ice_put_rx_mbuf(). Instead, set xdp_xmit in the ice_clean_rx_irq() function
directly. This avoids needing to pass the argument and avoids an extra
bit-wise OR for each buffer in the frame.
Move the increment of the ntc local variable to ensure it is updated *before*
all calls to ice_get_pgcnts() or ice_put_rx_mbuf(), as the loop logic
requires the index of the element just after the current frame.
Now that we use an index pointer in the ring to identify the packet, we no
longer need to track or cache the number of fragments in the rx_ring. |
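A rough illustration, in Go rather than the driver's C, of the loop-boundary change described above: release every ring buffer from first_desc up to, but not including, next_to_clean, instead of walking a cached fragment count that misses zero-sized descriptors. The types and names here are hypothetical.

```go
package rxring

// rxBuf stands in for the driver's per-descriptor buffer bookkeeping.
type rxBuf struct {
	page *byte
}

// putRxBuffers walks the ring, with wraparound, from the first descriptor of the
// frame to the index just past the frame, releasing every buffer along the way,
// even those the hardware posted with a size of 0.
func putRxBuffers(ring []rxBuf, firstDesc, nextToClean int, put func(*rxBuf)) {
	for i := firstDesc; i != nextToClean; i = (i + 1) % len(ring) {
		put(&ring[i]) // recycle or free the page backing this descriptor
	}
}
```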
| In Eclipse Jetty, versions <=9.4.57, <=10.0.25, <=11.0.25, <=12.0.21, <=12.1.0.alpha2, an HTTP/2 client may trigger the server to send RST_STREAM frames, for example by sending frames that are malformed or that should not be sent in a particular stream state, therefore forcing the server to consume resources such as CPU and memory.
For example, a client can open a stream and then send WINDOW_UPDATE frames with window size increment of 0, which is illegal.
Per the specification (https://www.rfc-editor.org/rfc/rfc9113.html#name-window_update), the server should send a RST_STREAM frame.
The client can now open another stream and send another bad WINDOW_UPDATE, therefore causing the server to consume more resources than necessary: this pattern does not exceed the maximum number of concurrent streams, yet the client is able to create an enormous number of streams in a short period of time.
The attack can be performed with other conditions (for example, a DATA frame for a closed stream) that cause the server to send a RST_STREAM frame.
Links:
* https://github.com/jetty/jetty.project/security/advisories/GHSA-mmxm-8w33-wc4h |
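A minimal sketch (in Go's golang.org/x/net/http2 packages, not Jetty code) of the frame exchange the advisory describes: a single WINDOW_UPDATE with an increment of 0 on an open stream, which obliges the server to answer with RST_STREAM. The server address and header values are placeholders; the point is to show the exchange that server-side abuse limits need to account for.

```go
package main

import (
	"bytes"
	"crypto/tls"
	"io"
	"log"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/hpack"
)

func main() {
	conn, err := tls.Dial("tcp", "server.example:443", &tls.Config{NextProtos: []string{"h2"}})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// HTTP/2 connection preface followed by an empty SETTINGS frame.
	io.WriteString(conn, http2.ClientPreface)
	fr := http2.NewFramer(conn, conn)
	fr.WriteSettings()

	// Open stream 1 with a minimal HEADERS frame.
	var hbuf bytes.Buffer
	enc := hpack.NewEncoder(&hbuf)
	for _, hf := range []hpack.HeaderField{
		{Name: ":method", Value: "GET"},
		{Name: ":scheme", Value: "https"},
		{Name: ":authority", Value: "server.example"},
		{Name: ":path", Value: "/"},
	} {
		enc.WriteField(hf)
	}
	fr.WriteHeaders(http2.HeadersFrameParam{
		StreamID:      1,
		BlockFragment: hbuf.Bytes(),
		EndStream:     true,
		EndHeaders:    true,
	})

	// Illegal per RFC 9113 §6.9: a WINDOW_UPDATE with a window size increment
	// of 0 on a stream, which the server must answer with RST_STREAM.
	fr.WriteWindowUpdate(1, 0)

	// Read frames until the server's RST_STREAM arrives (or the connection ends).
	for {
		f, err := fr.ReadFrame()
		if err != nil {
			return
		}
		if _, ok := f.(*http2.RSTStreamFrame); ok {
			log.Println("server sent RST_STREAM for stream", f.Header().StreamID)
			return
		}
	}
}
```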
| In the Linux kernel, the following vulnerability has been resolved:
smb: client: fix smbdirect_recv_io leak in smbd_negotiate() error path
During tests of another unrelated patch I was able to trigger this
error: Objects remaining on __kmem_cache_shutdown() |
| In the Linux kernel, the following vulnerability has been resolved:
mm/kmemleak: avoid soft lockup in __kmemleak_do_cleanup()
A soft lockup warning was observed on a relatively small x86-64 system
with 16 GB of memory when running a debug kernel with kmemleak enabled.
watchdog: BUG: soft lockup - CPU#8 stuck for 33s! [kworker/8:1:134]
The test system was running a workload with hot unplug happening in
parallel. Then kmemleak decided to disable itself due to its inability to
allocate more kmemleak objects. The debug kernel has its
CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE set to 40,000.
The soft lockup happened in kmemleak_do_cleanup() when the existing
kmemleak objects were being removed and deleted one-by-one in a loop via a
workqueue. In this particular case, there are at least 40,000 objects
that need to be processed and given the slowness of a debug kernel and the
fact that a raw_spinlock has to be acquired and released in
__delete_object(), it could take a while to properly handle all these
objects.
As kmemleak has been disabled in this case, the object removal and
deletion process can be further optimized as locking isn't really needed.
However, it is probably not worth the effort to optimize for such an edge
case that should rarely happen. So the simple solution is to call
cond_resched() at periodic intervals in the iteration loop to avoid soft
lockup. |
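A Go-flavoured analogue of the pattern in the fix (the kernel patch calls cond_resched(); in a long user-space cleanup loop the equivalent idea is to yield periodically, e.g. with runtime.Gosched()), so a single worker cannot monopolize a CPU while tearing down tens of thousands of objects. The function and batch size are illustrative.

```go
package cleanup

import "runtime"

// cleanupAll releases every object but yields to the scheduler every 1024
// iterations so other work can run during a very long teardown loop.
func cleanupAll(objects []func()) {
	for i, release := range objects {
		release()
		if i%1024 == 0 {
			runtime.Gosched() // yield periodically; the kernel fix uses cond_resched()
		}
	}
}
```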
| A denial of service vulnerability exists in Next.js versions with Partial Prerendering (PPR) enabled when running in minimal mode. The PPR resume endpoint accepts unauthenticated POST requests with the `Next-Resume: 1` header and processes attacker-controlled postponed state data. Two closely related vulnerabilities allow an attacker to crash the server process through memory exhaustion:
1. **Unbounded request body buffering**: The server buffers the entire POST request body into memory using `Buffer.concat()` without enforcing any size limit, allowing arbitrarily large payloads to exhaust available memory.
2. **Unbounded decompression (zipbomb)**: The resume data cache is decompressed using `inflateSync()` without limiting the decompressed output size. A small compressed payload can expand to hundreds of megabytes or gigabytes, causing memory exhaustion.
Both attack vectors result in a fatal V8 out-of-memory error (`FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory`) causing the Node.js process to terminate. The zipbomb variant is particularly dangerous as it can bypass reverse proxy request size limits while still causing large memory allocation on the server.
To be affected, you must have an application running with `experimental.ppr: true` or `cacheComponents: true` configured, along with the NEXT_PRIVATE_MINIMAL_MODE=1 environment variable.
Strongly consider upgrading to 15.6.0-canary.61 or 16.1.5 to reduce risk and prevent availability issues in Next applications. |
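A minimal sketch of the two missing bounds, written in Go rather than the Node.js code actually involved and not representing Next.js's fix: cap how much of the request body is buffered, and cap how far compressed input may inflate. The handler name and the limits are illustrative assumptions.

```go
package resume

import (
	"bytes"
	"compress/zlib"
	"io"
	"net/http"
)

const (
	maxBody     = 1 << 20 // refuse to buffer request bodies above 1 MiB
	maxInflated = 8 << 20 // refuse decompressed output above 8 MiB (zip-bomb guard)
)

func resumeHandler(w http.ResponseWriter, r *http.Request) {
	// 1. Bounded request body buffering: MaxBytesReader fails the read once
	//    the body exceeds the limit, instead of buffering it all into memory.
	compressed, err := io.ReadAll(http.MaxBytesReader(w, r.Body, maxBody))
	if err != nil {
		http.Error(w, "body too large", http.StatusRequestEntityTooLarge)
		return
	}

	// 2. Bounded decompression: stop after maxInflated+1 bytes and reject the
	//    payload if the cap was hit, rather than inflating without limit.
	zr, err := zlib.NewReader(bytes.NewReader(compressed))
	if err != nil {
		http.Error(w, "bad payload", http.StatusBadRequest)
		return
	}
	defer zr.Close()

	var out bytes.Buffer
	n, err := io.Copy(&out, io.LimitReader(zr, maxInflated+1))
	if err != nil {
		http.Error(w, "bad payload", http.StatusBadRequest)
		return
	}
	if n > maxInflated {
		http.Error(w, "decompressed payload too large", http.StatusRequestEntityTooLarge)
		return
	}

	// ... continue processing out.Bytes() ...
	w.WriteHeader(http.StatusOK)
}
```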
| A denial of service vulnerability exists in self-hosted Next.js applications that have `remotePatterns` configured for the Image Optimizer. The image optimization endpoint (`/_next/image`) loads external images entirely into memory without enforcing a maximum size limit, allowing an attacker to cause out-of-memory conditions by requesting optimization of arbitrarily large images. This vulnerability requires that `remotePatterns` is configured to allow image optimization from external domains and that the attacker can serve or control a large image on an allowed domain.
Strongly consider upgrading to 15.5.10 or 16.1.5 to reduce risk and prevent availability issues in Next applications. |
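A short sketch (again in Go, not Next.js's actual code) of the missing bound when fetching remote images: check Content-Length when present and enforce a hard cap while reading, so an arbitrarily large source image cannot be pulled fully into memory. maxImageBytes and the function name are illustrative.

```go
package imgopt

import (
	"errors"
	"io"
	"net/http"
)

const maxImageBytes = 10 << 20 // illustrative 10 MiB cap for fetched source images

func fetchRemoteImage(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// Reject early when the server declares an oversized image.
	if resp.ContentLength > maxImageBytes {
		return nil, errors.New("remote image exceeds size limit")
	}
	// Content-Length can be absent or wrong, so also cap the actual read.
	data, err := io.ReadAll(io.LimitReader(resp.Body, maxImageBytes+1))
	if err != nil {
		return nil, err
	}
	if len(data) > maxImageBytes {
		return nil, errors.New("remote image exceeds size limit")
	}
	return data, nil
}
```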
| Multiple denial of service vulnerabilities exist in React Server Components, affecting the following packages: react-server-dom-parcel, react-server-dom-turbopack, react-server-dom-webpack.
The vulnerabilities are triggered by sending specially crafted HTTP requests to Server Function endpoints, and could lead to server crashes, out-of-memory exceptions or excessive CPU usage, depending on the vulnerable code path being exercised, the application configuration, and the application code.
Strongly consider upgrading to the latest package versions to reduce risk and prevent availability issues in applications using React Server Components. |
| A flaw was found in Undertow where malformed client requests can trigger server-side stream resets without triggering abuse counters. This issue, referred to as the "MadeYouReset" attack, allows malicious clients to induce excessive server workload by repeatedly causing server-side stream aborts. While not a protocol bug, this highlights a common implementation weakness that can be exploited to cause a denial of service (DoS). |
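A minimal, generic sketch (not Undertow's implementation) of the kind of abuse counter the advisory refers to: count the streams the server itself had to reset because of client misbehaviour and tear down the connection once a threshold is hit, so "MadeYouReset"-style clients cannot keep forcing server-side stream aborts for free. The type and method names are hypothetical.

```go
package h2guard

import "sync/atomic"

// connGuard tracks server-initiated stream resets for one HTTP/2 connection.
type connGuard struct {
	serverResets atomic.Int64
	limit        int64 // e.g. a per-connection budget of allowed server resets
}

// onServerReset records one server-initiated stream reset and reports whether
// the whole connection should now be closed (for example with a GOAWAY).
func (g *connGuard) onServerReset() (closeConnection bool) {
	return g.serverResets.Add(1) > g.limit
}
```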
| In the Linux kernel, the following vulnerability has been resolved:
io_uring: fix fget leak when fs don't support nowait buffered read
Heming reported a BUG when using io_uring doing link-cp on ocfs2. [1]
The following steps can reproduce this BUG:
mount -t ocfs2 /dev/vdc /mnt/ocfs2
cp testfile /mnt/ocfs2/
./link-cp /mnt/ocfs2/testfile /mnt/ocfs2/testfile.1
umount /mnt/ocfs2
Then umount will fail, and it outputs:
umount: /mnt/ocfs2: target is busy.
Tracing umount shows that mnt_get_count() does not return the expected value.
After a deep investigation of fget()/fput() in the related code flow, I
finally found that fget() leaks because ocfs2 doesn't support nowait
buffered reads.
io_issue_sqe
|-io_assign_file // do fget() first
|-io_read
|-io_iter_do_read
|-ocfs2_file_read_iter // return -EOPNOTSUPP
|-kiocb_done
|-io_rw_done
|-__io_complete_rw_common // set REQ_F_REISSUE
|-io_resubmit_prep
|-io_req_prep_async // override req->file, leak happens
This was introduced by commit a196c78b5443 in v5.18. Fix it by not
re-assigning req->file if it has already been assigned.
[1] https://lore.kernel.org/ocfs2-devel/ab580a75-91c8-d68a-3455-40361be1bfa8@linux.alibaba.com/T/#t |
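A Go-flavoured sketch (the real fix is in C inside io_uring) of the guard the patch adds: only take a new file reference when none is held yet, so an already-acquired reference is never overwritten and therefore never released. The request and file types and the acquire function are hypothetical.

```go
package ioring

type file struct {
	refs int // stands in for the kernel's file reference count
}

type request struct {
	file *file
}

// assignFile mirrors the fixed behaviour: if a reference is already held,
// keep it; overwriting it would drop the only pointer that could release it.
func assignFile(req *request, acquire func() *file) {
	if req.file != nil {
		return
	}
	req.file = acquire() // takes a reference that must be released exactly once later
}
```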
| A Missing Release of Memory after Effective Lifetime vulnerability in the Packet Forwarding Engine (PFE) of Juniper Networks Junos OS and Junos OS Evolved allows an adjacent, unauthenticated attacker to cause an FPC to crash, leading to Denial of Service (DoS).
On all Junos OS and Junos OS Evolved platforms, in an EVPN-VXLAN scenario, when specific ARP packets are received on an IPv4 network, or specific NDP packets are received on an IPv6 network, kernel heap memory leaks, which eventually leads to an FPC crash and restart.
This issue does not affect MX Series platforms.
Heap size growth on the FPC can be seen using the command below:
user@host> show chassis fpc
Temp CPU Utilization (%) CPU Utilization (%) Memory Utilization (%)
Slot State (C) Total Interrupt 1min 5min 15min DRAM (MB) Heap Buffer
0 Online 45 3 0 2 2 2 32768 19 0 <<<<<<< Heap increase in all FPCs
This issue affects Junos OS:
* All versions before 21.2R3-S7,
* 21.4 versions before 21.4R3-S4,
* 22.2 versions before 22.2R3-S1,
* 22.3 versions before 22.3R3-S1,
* 22.4 versions before 22.4R2-S2, 22.4R3.
and Junos OS Evolved:
* All versions before 21.2R3-S7-EVO,
* 21.4-EVO versions before 21.4R3-S4-EVO,
* 22.2-EVO versions before 22.2R3-S1-EVO,
* 22.3-EVO versions before 22.3R3-S1-EVO,
* 22.4-EVO versions before 22.4R3-EVO. |
| A Missing Release of Memory after Effective Lifetime vulnerability in the Juniper Tunnel Driver (jtd) of Juniper Networks Junos OS Evolved allows an unauthenticated network-based attacker to cause Denial of Service.
Receipt of specifically malformed IPv6 packets, destined to the device, causes kernel memory to not be freed, resulting in memory exhaustion leading to a system crash and Denial of Service (DoS). Continuous receipt and processing of these packets will continue to exhaust kernel memory, creating a sustained Denial of Service (DoS) condition.
This issue only affects systems configured with IPv6.
This issue affects Junos OS Evolved:
* from 22.4-EVO before 22.4R3-S5-EVO,
* from 23.2-EVO before 23.2R2-S2-EVO,
* from 23.4-EVO before 23.4R2-S2-EVO,
* from 24.2-EVO before 24.2R1-S2-EVO, 24.2R2-EVO.
This issue does not affect Juniper Networks Junos OS Evolved versions prior to 22.4R1-EVO. |
| A Missing Release of Memory after Effective Lifetime vulnerability in the packet forwarding engine (PFE) of Juniper Networks Junos OS on MX Series allows an unauthenticated adjacent attacker to cause a Denial-of-Service (DoS).
In a subscriber management scenario, login/logout activity triggers a memory leak; the leaked memory gradually accumulates and eventually results in a crash.
user@host> show chassis fpc
Temp CPU Utilization (%) CPU Utilization (%) Memory Utilization (%)
Slot State (C) Total Interrupt 1min 5min 15min DRAM (MB) Heap Buffer
2 Online 36 10 0 9 8 9 32768 26 0
This issue affects Junos OS on MX Series:
* All versions before 21.2R3-S9
* from 21.4 before 21.4R3-S10
* from 22.2 before 22.2R3-S6
* from 22.4 before 22.4R3-S5
* from 23.2 before 23.2R2-S3
* from 23.4 before 23.4R2-S3
* from 24.2 before 24.2R2. |
| A Missing Release of Memory after Effective Lifetime vulnerability in the Packet Forwarding Engine (PFE) of the Juniper Networks Junos OS on the MX Series platforms with Trio-based FPCs allows an unauthenticated, adjacent attacker to cause a Denial of Service (DoS).
In the case of channelized Modular Interface Cards (MICs), every physical interface flap operation leaks heap memory. Over a period of time, continuous physical interface flap operations cause the local FPC to eventually run out of memory and crash.
The CLI command below can be used to check memory usage over a period of time:
user@host> show chassis fpc
Temp CPU Utilization (%) CPU Utilization (%) Memory Utilization (%)
Slot State (C) Total Interrupt 1min 5min 15min DRAM (MB) Heap Buffer
0 Online 43 41 2 2048 49 14
1 Online 43 41 2 2048 49 14
2 Online 43 41 2 2048 49 14
This issue affects Junos OS on MX Series:
* All versions before 21.2R3-S7,
* from 21.4 before 21.4R3-S6,
* from 22.1 before 22.1R3-S5,
* from 22.2 before 22.2R3-S3,
* from 22.3 before 22.3R3-S2,
* from 22.4 before 22.4R3,
* from 23.2 before 23.2R2,
* from 23.4 before 23.4R2. |