Age | Commit message | Author |
|
|
|
See https://github.com/jgm/cmark/issues/206.
|
|
as documented!
Closes #202.
|
|
|
|
|
|
The overflow could occur in the following condition:
the buffer ends with `\r` and the next memory address
contains `\n`.
Closes #184.
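A minimal sketch of the kind of bounds check that avoids this over-read
(function and variable names are illustrative, not the actual patch):

    #include <stddef.h>

    /* Advance past a line ending without reading beyond buf[len - 1].
     * Assumes pos < len on entry. Only peek at the byte after '\r'
     * when it is still inside the buffer; that missing check is what
     * allowed the out-of-bounds read. */
    static size_t skip_line_ending(const unsigned char *buf, size_t pos, size_t len)
    {
        if (buf[pos] == '\r') {
            pos++;
            if (pos < len && buf[pos] == '\n')
                pos++;
        } else if (buf[pos] == '\n') {
            pos++;
        }
        return pos;
    }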
|
|
This reverts commit 9e643720ec903f3b448bd2589a0c02c2514805ae.
|
|
This reverts commit 4fbe344df43ed7f60a3d3a53981088334cb709fc.
|
|
* Improve strbuf guarantees
Introduce BUFSIZE_MAX macro and make sure that the strbuf implementation
can handle strings up to this size.
* Abort early if document size exceeds internal limit
* Change types for source map offsets
Switch to size_t for the public API, making the public headers
C89-compatible again.
Switch to bufsize_t internally, reducing memory usage and improving
performance on 32-bit platforms.
* Make parser return NULL on internal index overflow
Make S_parser_feed set an error and ignore subsequent chunks if the
total input document size exceeds an internal limit. Make
cmark_parser_finish return NULL if an error was encountered. Add
public API functions to retrieve error code and error message.
strbuf overflow in renderers and OOM in parser or renderers still
cause an abort.
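A hypothetical caller-side sketch of how such an error-reporting API could be
used. The accessor name cmark_parser_get_error_message is an assumption based
on this description, not a confirmed public function; the rest is the existing
streaming API.

    #include <stdio.h>
    #include "cmark.h"

    static void parse_or_report(const char *input, size_t len)
    {
        cmark_parser *parser = cmark_parser_new(CMARK_OPT_DEFAULT);
        cmark_parser_feed(parser, input, len);
        cmark_node *doc = cmark_parser_finish(parser);
        if (doc == NULL) {
            /* Hypothetical accessor, assumed from the description above. */
            fprintf(stderr, "parse error: %s\n",
                    cmark_parser_get_error_message(parser));
        } else {
            cmark_node_free(doc);
        }
        cmark_parser_free(parser);
    }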
|
|
* open_new_blocks: always create child before advancing offset
* Source map
* Extent's typology
* In-depth python bindings
|
|
The `alloc` member wasn't initialized.
This also lets us add an assertion in `chunk_rtrim`, which doesn't
work for alloc'ed chunks.
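A rough sketch of what such an assertion could look like (the exact form in
the commit may differ; cmark_chunk and cmark_isspace come from cmark's own
headers):

    #include <assert.h>

    /* rtrim only shortens `len`, which is only valid for chunks that
     * alias another buffer, so assert the chunk does not own its own
     * allocation. */
    void cmark_chunk_rtrim(cmark_chunk *c)
    {
        assert(c->alloc == 0);
        while (c->len > 0 && cmark_isspace(c->data[c->len - 1]))
            c->len--;
    }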
|
|
|
|
- only check once for "not at end of line"
- check for null before we check for newline characters (the
previous patch would fail for NULL + CR)
See #160.
|
|
|
|
Fix memory leak in list parsing
|
|
If `parse_list_marker` returns 1, but the second part of the `&&` clause
is false, we leak `data` here.
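A generic illustration of the bug shape, not the actual blocks.c code: the
helper allocates *out when it returns 1, so the caller must free *out on the
path where the rest of its condition fails.

    #include <stdlib.h>

    static int parse_marker(char **out)
    {
        *out = malloc(16);          /* allocated only on success */
        return *out != NULL;
    }

    static void handle(int second_condition)
    {
        char *data = NULL;
        if (parse_marker(&data) && second_condition) {
            /* ... consume data, then release it ... */
            free(data);
        } else {
            free(data);  /* also covers: marker parsed, second test failed */
        }
    }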
|
|
|
|
|
|
Fixes #142.
|
|
|
|
|
|
Closes #141.
|
|
|
|
...they start with 1.
|
|
|
|
|
|
|
|
|
|
Reduce the storage size for the `cmark_code` struct
|
|
Save node information in flags instead of using one boolean for each
property.
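A minimal sketch of the technique (names here are illustrative, not the exact
cmark fields): a single flags byte replaces several one-byte booleans.

    #include <stdint.h>

    enum {
        NODE_OPEN            = 1 << 0,
        NODE_LAST_LINE_BLANK = 1 << 1,
        NODE_FENCED          = 1 << 2
    };

    struct node {
        uint8_t flags;
        /* ... other members ... */
    };

    /* set:   n->flags |= NODE_OPEN;
     * clear: n->flags &= ~NODE_OPEN;
     * test:  (n->flags & NODE_OPEN) != 0  */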
|
|
|
|
|
|
The previous work for unbounded memory usage and overflows on the buffer
API had several shortcomings:
1. The total size of the buffer was limited by arbitrarily small
precision on the storage type for buffer indexes (typedef'd as
`bufsize_t`). This is not a good design pattern in secure applications,
particularly since it requires the addition of helper functions to cast
to/from the native `size` types and the custom type for the buffer, and
check for overflows.
2. The library was calling `abort` on overflow and memory allocation
failures. This is not a good practice for production libraries, since it
turns a potential RCE into a trivial, guaranteed DoS to the whole
application that is linked against the library. It defeats the whole
point of performing overflow or allocation checks when the checks will
crash the library and the enclosing program anyway.
3. The default size limits for buffers were essentially unbounded
(capped to the precision of the storage type) and could lead to DoS
attacks by simple memory exhaustion (particularly critical in 32-bit
platforms). This is not a good practice for a library that handles
arbitrary user input.
Hence, this patchset provides slight (but in my opinion critical)
improvements in this area, copying some of the patterns we've used in
the past for high-throughput, security-sensitive Markdown parsers:
1. The storage type for buffer sizes is now platform native (`ssize_t`).
Ideally, this would be a `size_t`, but several parts of the code expect
buffer indexes to be possibly negative. Either way, switching to a
`size` type is a strict improvement, particularly on 64-bit platforms.
All the helpers that assured that values cannot escape the `size` range
have been removed, since they are superfluous.
2. The overflow checks have been removed. Instead, the maximum size for
a buffer has been set to a safe value for production usage (32MB) that
can be proven not to overflow in practice. Users who need to parse
particularly large Markdown documents can increase this value. A static,
compile-time check has been added to ensure that the maximum buffer size
cannot overflow on any growth operation (see the sketch after this list).
3. The library no longer aborts on buffer overflow. The CMark library
now follows the convention of other Markdown implementations (such as
Hoedown and Sundown) and silently handles buffer overflows and
allocation failures by dropping data from the buffer. The result is
that pathological Markdown documents that try to exploit the library
will instead generate truncated (but valid, and safe) outputs.
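A minimal sketch of such a compile-time guard, assuming a doubling growth
strategy; the constant and the shape of the check are illustrative:

    #include <limits.h>

    #define BUFSIZE_MAX (32 * 1024 * 1024)   /* 32MB default cap */

    /* Refuses to compile if doubling a maximally sized buffer, plus one
     * byte for the terminator, could overflow a signed 32-bit index. */
    typedef char bufsize_max_growth_is_safe
        [(long long)BUFSIZE_MAX * 2 + 1 <= INT_MAX ? 1 : -1];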
All tests after these small refactorings have been verified to pass.
---
NOTE: Regarding 32-bit overflows, generating test cases that crash the
library is trivial (any input document larger than 2GB will crash
CMark), but most Python implementations have issues with large strings
to begin with, so a test case cannot be added to the pathological test
suite, since it's written in Python.
|
|
|
|
This change allows us to pass the new test introduced in
75f231503d2b5854f1ff517402d2751811295bf7.
Previously, when a list marker was followed only by spaces,
cmark expected the following content to be indented by
the same number of spaces. But in this case we should
treat the line just like a blank line and set the list padding
accordingly.
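For example, with input like the following (the `-` is followed only by
trailing spaces; the example illustrates the described rule and is not taken
from the test):

    -   
      item content

the marker line is treated as if it were blank, so content indented by the
marker width plus one (two columns here) continues the list item, rather than
having to match the marker plus all of its trailing spaces.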
|
|
Adds an internal field to the parser struct to keep track
of last_buffer_ended_with_cr.
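A condensed sketch of the idea; in cmark the flag lives on cmark_parser and
the logic sits in S_parser_feed, and the details here are simplified:

    #include <stdbool.h>
    #include <stddef.h>

    struct parser {
        bool last_buffer_ended_with_cr;
        /* ... */
    };

    static void feed(struct parser *p, const unsigned char *buf, size_t len)
    {
        /* A '\r' at the end of the previous chunk and a '\n' at the start
         * of this one form a single CRLF split across chunks: swallow the
         * '\n' so it is not counted as a second line ending. */
        if (p->last_buffer_ended_with_cr && len > 0 && buf[0] == '\n') {
            buf++;
            len--;
        }
        /* ... split buf[0..len) into lines and process them ... */
        p->last_buffer_ended_with_cr = (len > 0 && buf[len - 1] == '\r');
    }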
|
|
Fixes issue #114.
|
|
Newer MSVC versions support enough of C99 to be able to compile cmark
in plain C mode. Only the "inline" keyword is still unsupported.
We have to use "__inline" instead.
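The usual workaround looks like this (a sketch; cmark's actual configuration
header may differ):

    /* MSVC in plain C mode does not know C99 "inline"; map it to the
     * compiler-specific "__inline" keyword. */
    #if defined(_MSC_VER) && !defined(__cplusplus)
    #define inline __inline
    #endif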
|
|
|
|
|
|
This reverts commit 4d2d486333c358eb3adf3d0649163e319a3b8b69.
This commit caused a valgrind invalid read.
==29731== Invalid read of size 4
==29731== at 0x40500E: S_process_line (blocks.c:1050)
==29731== by 0x403CF7: S_parser_feed (blocks.c:526)
==29731== by 0x403BC9: cmark_parser_feed (blocks.c:494)
==29731== by 0x433A95: main (main.c:168)
==29731== Address 0x51d5b60 is 64 bytes inside a block of size 128 free'd
==29731== at 0x4C27D4E: free (vg_replace_malloc.c:427)
==29731== by 0x4015F0: S_free_nodes (node.c:134)
==29731== by 0x401634: cmark_node_free (node.c:142)
==29731== by 0x4033B1: finalize (blocks.c:259)
==29731== by 0x40365E: add_child (blocks.c:337)
==29731== by 0x4046D8: try_new_container_starts (blocks.c:836)
==29731== by 0x404F12: S_process_line (blocks.c:1015)
==29731== by 0x403CF7: S_parser_feed (blocks.c:526)
==29731== by 0x403BC9: cmark_parser_feed (blocks.c:494)
==29731== by 0x433A95: main (main.c:168)
|
|
|
|
|
|
|
|
|
|
|
|
It's a programming error if the type is out of range.
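In other words, the natural shape is an assert rather than a recoverable error
path. A small illustration only; the bounds and names are placeholders, not
the actual change:

    #include <assert.h>

    /* An out-of-range type can only come from a caller bug, so fail fast
     * in debug builds instead of returning an error code. */
    static void check_type(int type, int first_valid, int last_valid)
    {
        assert(type >= first_valid && type <= last_valid);
    }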
|
|
https://github.com/MathieuDuponchelle/cmark into MathieuDuponchelle-refactor-S_processLine
|
|
|
|
It's the core of the program, and I had too much trouble making
sense of it: two loops with many cases and other code
interspersed hurt my head.
All the tests still passed before rebasing; now I've got the
exact same set of issues as master.
|