chris613
2019-9-11 07:20:34

@samdphillips yeah, I found char-not and char-not-in, but those take chars, not other parsers.


chris613
2019-9-11 07:20:57

I’ve worked around it by using those for now; it just would have been … neater to do it the other way, I think.


chris613
2019-9-11 07:21:37

as I’ve already extracted that set of chars into a parser, quote/p
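
A guess at what that workaround looks like, assuming the chars in question are quote characters and that char-in/p and char-not-in/p from megaparsack/text are the combinators meant; the actual character set and the name not-quote/p are made up for illustration:

    #lang racket
    (require megaparsack megaparsack/text)

    ;; Hypothetical character set; the real one isn't shown in the chat.
    (define quote-chars "\"'")

    ;; The extracted parser: matches any one of the quote chars.
    (define quote/p (char-in/p quote-chars))

    ;; The workaround: since there's no not/p to negate quote/p itself,
    ;; negate the underlying character set instead.
    (define not-quote/p (char-not-in/p quote-chars))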


aymano.osman
2019-9-11 15:43:52

I’d be interested in reading this.


samdphillips
2019-9-11 17:21:30

I’ve seen a not/p-like combinator in other parser combinator systems, but there is generally a tradeoff in memory and in how the input stream is managed. As @notjack said, there is probably an issue with implementing it generally in megaparsack.
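
A toy sketch of where that tradeoff comes from, using a throwaway parser representation (functions from a list of chars to a success/failure value), not megaparsack's actual internals:

    #lang racket

    ;; A parser here is a function from a list of chars to either
    ;; (list result rest-of-input) on success or #f on failure.

    ;; Negative lookahead: succeed, consuming nothing, iff p fails.
    (define ((not/p p) input)
      (if (p input)
          #f                     ; p succeeded, so not/p fails
          (list (void) input)))  ; p failed; hand back the input untouched

    ;; A single-char parser for demonstration:
    (define ((char=/p c) input)
      (and (pair? input)
           (char=? (car input) c)
           (list c (cdr input))))

    ((not/p (char=/p #\")) (string->list "abc")) ; succeeds, consumes nothing

The catch is that not/p must hold the entire input it was given so it can hand it back unconsumed; with a stream-backed input, that means buffering everything p might inspect, which is the memory cost of managing the input stream this way.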


soegaard2
2019-9-11 18:13:50

@aymano.osman I’ll post a link to the GitHub repo when it’s runnable.


sydney.lambda
2019-9-12 01:01:30

Would it be possible, and if so, would it be a massive undertaking, to create a match expander that handles something like:

    [(list (? db/t?) (:+ number/t bytes-list))] ; matches the literal token db, followed by one or more number tokens

I have my lexer producing a token stream lazily via a parameter, and I’m using (peek-token) and (read-token) functions to parse, similar to using an input port. This works alright, despite being a bit ugly. For example, the following potentially parses an assignment like “foo = 42” when an identifier token is encountered:

    (define (identifier/p name ast)
      (match (stream->list (peek-tokens 2))
        [(list (? equals/t?) (or (? number/t? value) (? identifier/t? value)))
         (match (stream->list (read-tokens 2))
           [(list _ name)
            (line/p (Equals name value))])]))

However, to handle the first example, you’d need to step through the stream checking a cell at a time, because (many newline/t) could be arbitrarily long. Converting the entire stream to a list would work with a … pattern, but that would completely defeat the purpose of having a lazy stream of tokens in the first place, sadly. Hope that makes sense; this is already far too long, so I omitted a lot of (hopefully unnecessary for the point) code. Thank you :)
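
One way the fixed-shape part of this could be sketched with define-match-expander, using hypothetical stand-ins for the real token predicates. Two caveats: a match expander expands to a single pattern, so the one-or-more ellipsis has to live inside it (it can't occupy a single position inside an enclosing (list ...) pattern as proposed above), and match still needs a fully forced list, so this doesn't solve the lazy-stream problem described:

    #lang racket

    ;; Hypothetical stand-ins for the real db/t? and number/t? predicates.
    (define (db/t? t) (eq? t 'db))
    (define (number/t? t) (number? t))

    ;; The expander wraps the whole sequence so it can emit the ..1
    ;; (one-or-more) ellipsis itself:
    (define-match-expander db-then-nums
      (syntax-rules ()
        [(_ id) (list (? db/t?) (? number/t? id) ..1)]))

    (match '(db 1 2 3)
      [(db-then-nums ns) ns]) ; => '(1 2 3)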


sydney.lambda
2019-9-12 01:07:08

https://gist.github.com/dys-bigwig/d0545698ee1e0909ee09efbbb3690435 here’s the rest of the code (barring the lexer, which is just a regular lex one) in case I really did explain it so poorly it was impossible to understand.


notjack
2019-9-12 01:23:12

@sydney.lambda I think the peek-tokens stuff you’re doing is conflicting with how megaparsack expects you to do stuff. Instead of sticking the stream in a parameter and peeking at it, you should use megaparsack’s parser combinators. See the megaparsack docs section on chaining parsers together in sequence using do and chain: https://docs.racket-lang.org/megaparsack/parsing-basics.html#(part._sequencing-parsers)
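
For reference, a sketch of what that sequencing style looks like, loosely based on the linked docs; the grammar here is made up, and it assumes the standard megaparsack/text combinators:

    #lang racket
    (require megaparsack megaparsack/text
             data/monad data/applicative)

    ;; Parse an assignment like "foo = 42" by chaining parsers with do
    ;; notation rather than peeking at a token stream by hand.
    (define assignment/p
      (do [name <- (many+/p letter/p)]
          (many/p space/p)
          (char/p #\=)
          (many/p space/p)
          [value <- integer/p]
          (pure (list (list->string name) value))))

    (parse-result! (parse-string assignment/p "foo = 42"))
    ; => '("foo" 42)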


sydney.lambda
2019-9-12 01:25:42

This is for my own parser; I’m trying to write one by hand. It would be possible without, but most have this, so it’d be good if I could implement it too.


sydney.lambda
2019-9-12 01:26:33

Sorry, I’m not always the best at explaining things haha.


notjack
2019-9-12 01:28:25

oh, sorry! my bad, I missed that you’re not using megaparsack at all for this. oops


sydney.lambda
2019-9-12 01:35:11

No worries :) The previous conversation was about megaparsack, after all.


samth
2019-9-12 02:10:39

@sydney.lambda I’m not sure what the match expander is supposed to do …