cfinegan
2017-6-16 17:35:11

@notjack raco test already allows you to specify command-line arguments to forward in info.rkt, so we could implement a better interface for specifying these filters (i.e. using data structures instead of raw strings) and allow users to do it either way, if that makes sense. What kind of interface would you like to see for something like this? I’m thinking along the lines of (struct test-filters (names-to-run k/v-filters files-to-run lines-to-run)). @greg Part of the idea is that users can drill down to pretty much any individual test without altering the test file itself, so you could do things like raco test my/module -- file=foo.rkt line=20 and get just that one test result. I’m not sure that environment variables can offer the same flexibility.
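(A minimal sketch of the struct cfinegan proposes above; the struct, its fields, and the example values are hypothetical, and nothing here is existing raco test API.)

```racket
#lang racket/base

;; Hypothetical filter record, following the shape proposed above.
(struct test-filters
  (names-to-run    ; list of test names to run, e.g. '("parser-roundtrip")
   k/v-filters     ; alist of key/value filters, e.g. '((file . "foo.rkt"))
   files-to-run    ; list of file paths
   lines-to-run)   ; list of line numbers
  #:transparent)

;; A filter value equivalent to: raco test my/module -- file=foo.rkt line=20
(define just-one-test
  (test-filters '() '((file . "foo.rkt") (line . 20)) '() '()))
```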


notjack
2017-6-16 18:48:16

@cfinegan the interface I’d ideally like is two-part, where there’s a way to filter “kinds” of tests (each test has exactly 1 kind) and there’s a way to filter specific named tests


notjack
2017-6-16 18:49:27

the “kinds” part requires that a programmer declare what kinds of tests they have. I think different submodules like test, integration-test, perf-test, etc. would be fine for that. Then info.rkt could declare what submodule names to look for by default (does raco test already support that?)
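(A sketch of the “kinds as submodules” convention described above. `test` is the conventional raco test submodule; `integration-test` is a hypothetical name a project could standardize on.)

```racket
#lang racket/base

(define (add1* n) (+ n 1))

;; fast unit tests: run by `raco test` by default
(module+ test
  (require rackunit)
  (check-equal? (add1* 1) 2))

;; slower tests: run only when explicitly selected,
;; e.g. raco test -s integration-test this-file.rkt
(module+ integration-test
  (require rackunit)
  ;; checks that hit real services would go here
  (check-true #t))
```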


greg
2017-6-16 19:00:17

There is a -s command-line flag, e.g. raco test -s slow-test file.rkt runs the slow-test submodule in file.rkt


greg
2017-6-16 19:01:07

You can specify -s mod multiple times, too.


greg
2017-6-16 19:01:18

Per command line.


notjack
2017-6-16 19:01:33

the flag is definitely useful when you’re in control of the test command, like when setting up Travis


notjack
2017-6-16 19:01:51

but I’m specifically wondering how to say “these are tests that shouldn’t run locally but should run in DrDr”


greg
2017-6-16 19:03:37

I think there are a variety of usage scenarios, plus people have different kinds of projects with tests of varying number and speed, and have also seen different approaches in various languages.


greg
2017-6-16 19:04:29

So for instance I’m thinking, “why would I want to run only a test at line 20?”, but, that’s just because I haven’t been in that situation so far.


greg
2017-6-16 19:04:56

I think it’s probably good to lay out, “What are some situations where we want to run some but not all tests?”


notjack
2017-6-16 19:05:10

I think we need to write the test frameworks that would actually need filtering before figuring out filtering :)


greg
2017-6-16 19:05:12

And maybe also, “What are some existing ways you can run some but not all code?”


notjack
2017-6-16 19:05:33

@greg what do you typically do for tests that make network calls?


greg
2017-6-16 19:05:44

And maybe think about economical but convenient ways to compose all those? idk


samth
2017-6-16 19:06:19

@notjack @greg: TR has a --just flag for its test suite


notjack
2017-6-16 19:06:34

@samth what’s that do?


samth
2017-6-16 19:06:49

runs a single integration test


greg
2017-6-16 19:08:39

@notjack That’s a good example where, e.g. the AWS package, I just run the tests locally with my AWS creds and there are $$ involved, so I don’t run those on Travis CI or on the pkg build server. Also I run specific tests in the REPL, usually. In racket-mode if point is inside a module form, then when you “run” you “enter” that (sub)module. That includes test submods, of course. So I sometimes use that to run individual tests.


greg
2017-6-16 19:12:58

btw I know you can feed creds securely to Travis CI, so I could have it run tests against real AWS. I just mean that I don’t want to have Travis run those tests on 10 different Racket versions, and on every little commit, on my nickel :wink:


notjack
2017-6-16 19:13:32

@samth how large is an integration test? is it typically a couple of lines that does something heavyweight, or something module sized, or something with multiple sizable modules?


notjack
2017-6-16 19:13:41

@greg that’s definitely a good use case to keep in mind


samth
2017-6-16 19:14:05

@notjack Integration tests are by definition modules that we run to see if they typecheck/run (or if they error in the right way)


notjack
2017-6-16 19:14:48

@samth out of curiosity, could you do that as a regular check that takes a syntax object and evals it?


samth
2017-6-16 19:15:09

@notjack the code is really a lot more complicated than that


notjack
2017-6-16 19:16:54

@samth oh wow it’s spinning up multiple places and parallelizing too


samth
2017-6-16 19:17:07

yeah there’s a lot there


notjack
2017-6-16 19:17:53

and conditionally doing text output or gui output


notjack
2017-6-16 19:17:55

dang


samth
2017-6-16 19:18:03

you might also be interested in the much simpler match tests: https://github.com/racket/racket/blob/master/pkgs/racket-test/tests/match/main.rkt


samth
2017-6-16 19:18:21

the gui code is never used and probably doesn’t work


notjack
2017-6-16 19:18:31

heh


cfinegan
2017-6-16 20:16:50

I’m getting the impression that it’s more popular to organize tests into modules than to name individual tests (or groups of tests) using the testing libraries’ info-tracking features. What about the ability to supply raco test with a set of regular expressions that would be matched against module names? Or would people be more likely to separate tests within the same module into named groups if there were better support for filtering on those names?
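(A sketch of the regexp-matching idea floated above; `matching-submodules` is a hypothetical helper, not part of raco test.)

```racket
#lang racket/base

;; Keep only the submodule names that match at least one
;; user-supplied regexp.
(define (matching-submodules patterns submodule-names)
  (for/list ([name (in-list submodule-names)]
             #:when (for/or ([p (in-list patterns)])
                      (regexp-match? p (symbol->string name))))
    name))

(matching-submodules (list #rx"integration" #rx"perf")
                     '(test integration-test perf-test))
;; => '(integration-test perf-test)
```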


samth
2017-6-16 20:17:20

@cfinegan I bet better support would make it more widely used


cfinegan
2017-6-16 20:23:47

I agree, although it would have to be compatible with the way people organize their tests. Right now, for example, rackunit “names” tests after the type of check being invoked (check-equal?, check-not-false, etc.), which isn’t a super useful thing to filter on.
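(Illustrating the naming point above: a bare check is reported only by its check form, while wrapping it in rackunit’s standard test-case form gives it a name a filter could match on.)

```racket
#lang racket/base
(require rackunit)

;; Reported only as a check-equal? if it fails; no filterable name.
(check-equal? (+ 1 1) 2)

;; Reported under an explicit name that filtering could target.
(test-case "addition is commutative"
  (check-equal? (+ 1 2) (+ 2 1)))
```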