
I have added to the unsorted rest. I think we need to be bolder - those two expert reviews both seem to be seeking reasons to reject. The lack of objectivity is almost palpable.

How do you suggest we be bolder? I mean, what edits do you suggest?
The one idea I have for being bolder is to point out that the reviews give contradictory advice as to “what experimental design we ought to follow”. Would you please go into more detail about those contradictions?

(Those contradictions can be under “This perspective also addresses other reviewer questions”)

I am trying to work my way up to suggesting edits.

I do like the sentence that is in bold in the current response draft.

I agree about pointing out the contradictions in the advice on experimental design. And, as I did say, some of that advice is entirely unrealistic [and they know it]. Is it wise to admit that this is not the first round of review, and that previous ‘expert’ reviewers gave yet more contradictory advice?

We should also be bolder about their asking about the ‘completeness’ of the algorithms. An expert in the theory of PPLs may ask that, but anyone who has tried to write an optimizing compiler for any PL, never mind a PPL, would not. This is automatic compilation; of course the optimizer is going to be incomplete. Sheesh!

Unfortunately, I need to move on to other tasks now. Hopefully someone else can attempt to integrate these ideas into the response. I’ll be back on this, but possibly not until tomorrow.