
That’s cool! waiting to look at the code :slightly_smiling_face:

Did you try with the original ~2000 target words, excluding the 10,000 allowed but non-target ones?

@massung It seems they have a series of free-to-play games and then push a subscription if you want to try the famous crossword. Since Wordle is “one puzzle a day”, it seems the perfect fit to attract users who like word games. That is, I don’t think they will require a subscription to play Wordle.

Wordle 227 4/6
:black_large_square::black_large_square::large_yellow_square::black_large_square::black_large_square:
:large_green_square::black_large_square::black_large_square::black_large_square::black_large_square:
:black_large_square::large_green_square::large_green_square::black_large_square::large_green_square:
:large_green_square::large_green_square::large_green_square::large_green_square::large_green_square:

Today was my starting word’s lucky day:
Wordle 227 2/6
:large_yellow_square::large_yellow_square::white_large_square::white_large_square::large_green_square:
:large_green_square::large_green_square::large_green_square::large_green_square::large_green_square:

haha, nice!

Wordle 227 3/6*
:black_large_square::black_large_square::black_large_square::large_green_square::large_green_square:
:black_large_square::black_large_square::large_green_square::large_green_square::large_green_square:
:large_green_square::large_green_square::large_green_square::large_green_square::large_green_square:

my solver in hard mode was just as lucky as Matthew today

One thing that I want even more than the Wordle solver is the Wordle revealer. Given a solution trace and a known solution, can you reveal which words were tried, in a way that is consistent with the trace?
Of course, it’s not gonna be unique, but we can still try our best.
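
Something like this would be a start, I think (a rough sketch, untested; `all-words` stands in for the allowed-guess list, and I’m encoding each trace row as a string of G/Y/B):
```
#lang racket
;; Sketch of a "Wordle revealer": given a known solution and one row of a
;; shared trace (encoded as e.g. "BYBBG"), list the guesses that would
;; produce exactly that row.

;; Wordle-style feedback for a guess against a solution, including the
;; usual handling of repeated letters.
(define (feedback guess solution)
  (define g (string->list guess))
  (define s (string->list solution))
  ;; first pass: mark greens, and pool the unmatched solution letters
  (define greens (map char=? g s))
  (define pool (make-hash))
  (for ([sc s] [green? greens] #:unless green?)
    (hash-update! pool sc add1 0))
  ;; second pass: yellows consume from the pool, the rest are blacks
  (list->string
   (for/list ([gc g] [green? greens])
     (cond [green? #\G]
           [(> (hash-ref pool gc 0) 0)
            (hash-update! pool gc sub1)
            #\Y]
           [else #\B]))))

;; All guesses consistent with one trace row, for a known solution.
(define (consistent-guesses row solution all-words)
  (filter (λ (w) (string=? (feedback w solution) row)) all-words))
```
Doing that row by row gives a candidate set per line; for hard mode you’d additionally require each guess to respect the feedback from the earlier rows.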

I think someone managed to use solution traces from twitter to figure out the word of the day.

My RL/Q-learning program solving today’s:
Wordle 227 4/6
:white_large_square::large_yellow_square::white_large_square::white_large_square::large_green_square:
:white_large_square::large_green_square::large_green_square::large_green_square::large_green_square:
:white_large_square::large_green_square::large_green_square::large_green_square::large_green_square:
:large_green_square::large_green_square::large_green_square::large_green_square::large_green_square:

is it tabular Q-learning?


yeah, pretty straightforward

i keep working on it bit-by-bit, trying to improve things (mostly getting the memory footprint down).

It’s fine for 500–1000 words right now, but the memory size grows way too much when talking about all 13k words. At least too much for my laptop :stuck_out_tongue:

Yeah, that becomes a fairly large table for 10,000 words :slightly_smiling_face:

right now each word is an action. i’d rather try to encode it so that letter positions are actions (limiting the action space considerably), but I haven’t yet come up with a nice way to ensure the resulting “action” is a legit word

i’m thinking of trying it where the game is a loss if it tries something that isn’t a legit word. which would take longer to train, but might work

could work indeed. Or you just filter out the possible words, iteratively, as you take actions, and backtrack on illegal words?

that’s an interesting idea.

What I haven’t figured out is how to encode it, though. Doing 5 different actions (one per letter) is easy. But while together they may make an illegal word (and probably do, given that there are 26^5 = 11,881,376 combinations of 5 letters), individually they may be okay. They can only be negatively reinforced as a group and not individually. Still haven’t figured out a way to do that.

I’m pretty sure the number of transitions from one letter to the next, when considering the actual list of words, should be fairly limited

that will give you just a tree where each leaf is a possible word, so the number of nodes is going to be O(N)

of course you lose some precision because words are grouped as you say, but that’s still worth a try, in particular since it clearly reduces the memory footprint to O(N)
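
something like this, as a rough sketch (untested; `words` stands in for the word list) — every path through the trie is a legal word, so picking letters one at a time can’t leave the list:
```
;; Sketch: a prefix tree (trie) over the word list. Each node is a mutable
;; hash mapping a next letter to its child; the node after a word's last
;; letter is an empty hash.
(define (build-trie words)
  (define (insert! node w i)
    (when (< i (string-length w))
      (let ([child (hash-ref! node (string-ref w i) make-hash)])
        (insert! child w (+ i 1)))))
  (define root (make-hash))
  (for ([w words]) (insert! root w 0))
  root)

;; The legal next letters after a given prefix ('() if the prefix is dead).
(define (next-letters trie prefix)
  (let loop ([node trie] [cs (string->list prefix)])
    (cond [(not node) '()]
          [(null? cs) (hash-keys node)]
          [else (loop (hash-ref node (car cs) #f) (cdr cs))])))
```
with ~13k five-letter words that’s at most ~65k nodes, so well within the O(N) bound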

So, that works only if you consider one of the actions “primary”, which may be a legit solution.
If you pick the letters one at a time, the first letter chosen limits what options there are for the second letter, etc., as you described.
But I don’t know if that hurts the Q-learning. Not all actions will get equal “learning time” per state. And you still have the same problem of updating the Q-table(s): the actions aren’t independent. I can’t just reward “E in the 5th position” alone (I think?).

you can bias the selection/randomization by the number of words for each action

that is, you’ve picked 3 letters and you look for the next one. if there are 10 words with A as the next letter and 2 with E, you should pick A 10 times out of 12

Rewarding “E in the 5th position” would be a different model, but it’s still doable
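
for the biasing, roughly this on top of the trie sketch above (hypothetical helpers, untested):
```
;; Count the words below a trie node (all words have the same length, so
;; every empty node is exactly one complete word).
(define (count-words node)
  (if (zero? (hash-count node))
      1
      (for/sum ([(letter child) (in-hash node)]) (count-words child))))

;; Pick the next letter with probability proportional to the number of
;; words below it. `node` must be a non-leaf node of the trie.
(define (pick-next-letter node)
  (define choices
    (for/list ([(letter child) (in-hash node)])
      (cons letter (count-words child))))
  (define total (for/sum ([p choices]) (cdr p)))
  (let loop ([r (random total)] [ps choices])
    (if (< r (cdar ps))
        (caar ps)
        (loop (- r (cdar ps)) (cdr ps)))))
```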


knowing whether the game was in hard mode or not can make a significant difference, though. But we may not always have this information from the shared Wordle picture, so I guess we need to assume easy mode, in which case the matches are independent of one another

@sorawee Well, it turns out there are 999 words that are consistent with Jens Axel’s sequence, ~even in hard mode~ (edit: clearly the sequence can’t be in hard mode, due to line 3).
I can train constraining to the actual solution and see what the possible choices are, maybe?

Can you provide some examples? Perhaps we can see if they are reasonable

For example, the target word could be pride, and the game played by Jens Axel could be ("karma" "pluck" "bribe" "pride"); or for thank, ("outdo" "tweed" "whack" "thank"); or for loopy, ("helix" "linen" "booby" "loopy").

Clearly the game was not in hard mode though (due to line 3).

If the game is in hard mode, it may restrict the possibilities more, but your 3-line grid today may be too small to judge :slightly_smiling_face:

I think I’ll use the NY Times buyout as an excuse to not enhance my solver :) Spoiler word choices in thread:

% rlwrap racket wordle.rkt
Guess: cares
Feedback: eeeii
Guess: stile
Feedback: iieep
Guess: towse
Feedback: piepp
Guess: those
Feedback: ppppp
4

Well, one easy improvement is to use Unicode instead of i/e/p

Can you elaborate?

I want to type something for each letter, so I’m not sure how Unicode would improve things.

The Twitter guy used more than one image.

Squares!

:black_large_square::large_green_square::large_yellow_square:

yeah, but typing square doesn’t seem so convenient :wink:

@sorawee how do you type a square?

The “Feedback” is for the human to provide feedback to the program.

copy/paste? :stuck_out_tongue:

Wordle already has the squares to share with others, so no need to duplicate that.

ahhh

Got it. I thought this was a regular game, not a solver

Stupid me. You even said at the top of the post that it’s a solver lol

The squares can be used to display the feedback.

Talk to me like I’m a square (or: Talk to me like I’m 4)


Instead of i e and p, I would use squares to show the feedback. I’d just copy-paste them.

that seems rather cumbersome

Make :clap: All :clap::skin-tone-3: Keyboards :clap::skin-tone-3: Have :clap: Wordle :clap: Keys

What do you mean? (I mean, you’ll only have to paste the squares into the program generating the feedback once)

@soegaard2 this is a solver, right? The inputs to the solver are feedback from the game.

Oh! I am slow. I was assuming the solver was self-testing :slightly_smiling_face:

@samdphillips We can easily remap keys though, like, say, “iep” :smile:

Don’t make me spend the next 2 hours sourcing wordle key caps :smile:

on unix you can always use xmodmap :wink:

but probably best to simply take in letters and just re-output as squares maybe?
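
yeah, something like this would do it, I think (tiny sketch; the p/i/e meanings are inferred from the transcript above — p = right position, i = in the word elsewhere, e = eliminated):
```
;; Re-print an i/e/p feedback string as the familiar squares.
(define (letters->squares s)
  (list->string
   (for/list ([c (in-string s)])
     (case c
       [(#\p) #\🟩]   ; right letter, right position
       [(#\i) #\🟨]   ; right letter, wrong position
       [(#\e) #\⬛]   ; letter not in the word
       [else c]))))
```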

Of course, but what if I want to see actual funny keys on my keyboard :smile:

aha! :smile:

after fixing a bug, in easy mode all target words are solved in at most 5 steps: ((1 . 1) (2 . 46) (3 . 1112) (4 . 1091) (5 . 65))

$ time racket wordle-solver.rkt --target those --consistent-only
1: #goals+: 2315 #allowed+: 10657
Make a guess:
arise
guess value: 147525
🟫🟫🟫🟩🟩
2: #goals+: 20 #allowed+: 39
Make a guess:
loose
guess value: 64
🟫🟫🟩🟩🟩
3: #goals+: 3 #allowed+: 0
Make a guess:
those
guess value: 5
🟩🟩🟩🟩🟩
Solved in 3 guesses.
'(("arise" "BBBGG") ("loose" "BBGGG") ("those" "GGGGG"))
real 0m0.462s
user 0m0.406s
sys 0m0.056s

My solver is copying you I think :)

mine is arise, close, and those :slightly_smiling_face:

ah, almost!

{Wordle} There are 124 groups of 6+ words in the list which change only by their first letter o_O

Turns out that raise is even better than arise: they both have the same worst case of 168 on the size of the next set of consistent solutions, but the average case for raise is 61 while for arise it’s 63.7.
Interestingly, roate has an even better average case of 60.4, but a worst case of 195. Also, it’s not a possible solution (only an allowed word).
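
For reference, the kind of computation behind those numbers (a sketch, untested; it reuses the `feedback` helper from the revealer sketch above, `targets` stands in for the 2315 target words, and I’m taking the “average” to be the expected size of the remaining consistent set):
```
;; Sizes of the groups you get by partitioning the targets by the feedback
;; pattern a first guess produces.
(define (partition-sizes guess targets)
  (define groups (make-hash))
  (for ([t targets])
    (hash-update! groups (feedback guess t) add1 0))
  (hash-values groups))

;; Worst case = largest group; average case = expected size of the group
;; the actual target lands in, i.e. (sum of size^2) / (number of targets).
(define (worst-and-average guess targets)
  (define sizes (partition-sizes guess targets))
  (values (apply max sizes)
          (exact->inexact
           (/ (for/sum ([s sizes]) (* s s))
              (length targets)))))
```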

Ouch

It’s probably on purpose, to make the hard mode challenging

@mflatt Wordle 227 2/6
:large_yellow_square::large_yellow_square::white_large_square::white_large_square::large_green_square:
:large_green_square::large_green_square::large_green_square::large_green_square::large_green_square:
:eye: :eye:

Oh, that makes perfect sense actually.

I think people might find this interesting: https://github.com/Kindelia/HVM/blob/master/HOW.md
