@samdphillips yeah, I found char-not and char-not-in, but those take chars, not other parsers.
I've worked around it by using those for now; it just would have been … neater to do it the other way, I think,
as I've already extracted that set of chars into a parser quote/p
I’d be interested in reading this.
I've seen a not/p-like combinator in other parser combinator systems, but there is generally a tradeoff in memory and in how the input stream is managed. As @notjack said, there is probably an issue with implementing it generally in megaparsack.
@aymano.osman I'll post a link to the GitHub repo when it's runnable.
Would it be possible, and if so, would it be a massive undertaking, to create a match expander that handles something like: [(list (? db/t?) (:+ number/t bytes-list))] ; matches the literal token db, followed by one or more number tokens.
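For the repetition part, at least, Racket's match already has the ..1 ellipsis ("one or more of the preceding pattern"), so something close to that shape works without a custom expander. A rough sketch, where db/t? and number/t? are stand-in predicates over raw symbols and numbers rather than your actual token structs:

```racket
#lang racket

;; Stand-in token predicates (hypothetical, not the original token structs)
(define (db/t? t) (eq? t 'db))
(define (number/t? t) (number? t))

;; match's built-in ..1 ellipsis means "one or more of the preceding
;; pattern", so the repetition needs no custom expander; bytes-list is
;; bound to the list of all tokens matched by the repeated pattern.
(define (parse-db toks)
  (match toks
    [(list (? db/t?) (? number/t? bytes-list) ..1) bytes-list]
    [_ #f]))
```

e.g. (parse-db '(db 1 2 3)) gives '(1 2 3), while (parse-db '(db)) falls through to #f because ..1 requires at least one match.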
I have my lexer producing a token stream lazily via a parameter, and I'm using (peek-token) and (read-token) functions to parse, similar to using an input port. This works alright, despite being a bit ugly. For example, the following potentially parses an assignment like “foo = 42” when an identifier token is encountered:

(define (identifier/p name ast)
  (match (stream->list (peek-tokens 2))
    [(list (? equals/t?) (or (? number/t? value)
                             (? identifier/t? value)))
     (match (stream->list (read-tokens 2))
       [(list _ name) (line/p (Equals name value))])]))
However, to handle the first example, you'd need to step through the stream checking one cell at a time, because (many newline/t) could be arbitrarily long. Converting the entire stream to a list would work with a … pattern, but that would completely defeat the purpose of having a lazy stream of tokens in the first place, sadly. Hope that makes sense; this is already far too long, so I omitted a lot of (hopefully unnecessary for the point) code. Thank you :)
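One way to step through the stream a cell at a time without listifying it is a small helper that peels off a matching prefix and hands back the rest of the stream untouched, so an arbitrarily long run (like many newline tokens) never forces the whole stream. A sketch, assuming racket/stream-style lazy streams; take-while/stream is a made-up name:

```racket
#lang racket
(require racket/stream)

;; Consume leading tokens satisfying pred, one cell at a time.
;; Returns two values: the matched prefix (as a list, in order) and
;; the remaining stream, which stays lazy and unforced.
(define (take-while/stream pred s)
  (let loop ([s s] [acc '()])
    (if (and (not (stream-empty? s)) (pred (stream-first s)))
        (loop (stream-rest s) (cons (stream-first s) acc))
        (values (reverse acc) s))))
```

e.g. (take-while/stream even? (stream 2 4 5 6)) gives '(2 4) and a stream whose first element is 5.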
https://gist.github.com/dys-bigwig/d0545698ee1e0909ee09efbbb3690435 here's the rest of the code (barring the lexer, which is just a regular lex one) in case I really did explain it so poorly it was impossible to understand.
@sydney.lambda I think the peek-tokens stuff you're doing conflicts with how megaparsack expects you to work. Instead of sticking the stream in a parameter and peeking at it, you should use megaparsack's parser combinators. See the megaparsack docs section on chaining parsers together in sequence using do and chain: https://docs.racket-lang.org/megaparsack/parsing-basics.html#(part._sequencing-parsers)
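For reference, a minimal sketch of that do-style sequencing, assuming the stock combinators from megaparsack/text; it parses an assignment like "foo = 42" without any manual peeking:

```racket
#lang racket
(require megaparsack megaparsack/text
         data/monad data/applicative)

;; Sequence parsers with do notation: [x <- p] binds p's result,
;; bare parsers run for effect only. Whitespace handling here is
;; deliberately simple-minded.
(define assignment/p
  (do [name <- (many+/p letter/p)]    ; one or more letters
      (many/p space/p)                 ; optional spaces
      (char/p #\=)
      (many/p space/p)
      [value <- integer/p]
      (pure (list (list->string name) value))))
```

e.g. (parse-result! (parse-string assignment/p "foo = 42")) gives '("foo" 42).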
This is for my own parser; I'm trying to write one by hand. It would be possible without it, but most parsers have this, so it'd be good if I could implement it too.
Sorry, I’m not always the best at explaining things haha.
oh, sorry! my bad, I missed that you’re not using megaparsack at all for this. oops
No worries :) The previous conversation was about megaparsack, after all.
@sydney.lambda I’m not sure what match expander is supposed to do …