
@soegaard2 My personal preference: when I see “log in with X” it annoys me when I’m then asked to create a second set of credentials. Usually I’m using the federated login feature because I don’t want to come up with new credentials for that specific service.

Thanks for the input everyone - I have something to think about.

The GitHub authentication code needs to know a “client secret”. Other authentication methods will require other secrets. I have put the encrypted secrets in a file “secret.rkt”. (The source is public, so I can’t put the unencrypted secrets in the file.) This way I need to handle only one secret key on the server.
Is this a reasonable idea? And have I made any mistakes?
https://github.com/soegaard/web-tutorial/blob/master/listit4/app-racket-stories/secret.rkt

I usually have the secret in a file that I read in with file->value, and that file is not checked in.
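
A minimal sketch of that approach (the file name and key are made up for illustration): config.rktd is listed in .gitignore and holds a single readable datum.

    #lang racket
    ;; config.rktd is NOT checked in; it contains one readable datum, e.g.
    ;;   #hash((github-client-secret . "xxxxxxxx"))
    (define secrets (file->value "config.rktd"))
    (define github-client-secret
      (hash-ref secrets 'github-client-secret))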

It’s also very common to put these in environment variables.
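
A sketch of the environment-variable variant (the variable name GITHUB_CLIENT_SECRET is made up for illustration):

    #lang racket
    ;; Read the secret from the environment at startup and fail fast
    ;; if it is missing.
    (define github-client-secret
      (or (getenv "GITHUB_CLIENT_SECRET")
          (error 'config "environment variable GITHUB_CLIENT_SECRET is not set")))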

Right. @popa.bogdanp’s recent post talked about some of that and @lexi.lambda had a post/library that related to that as well.


yes

There are pros and cons.
Files are simple - but there is a risk of committing them by accident.
Environment variables: no risk of committing the values, but the secrets are no longer part of the repo. (Also, I am unsure how to set these in a secure way - use a shell script?)

It’s often done via an encrypted interface in the hosting environment

see Travis secrets, for example

or GitHub Actions

Thanks for the pointers.

Hadn’t looked at GitHub Actions before.

Bogdan uses Procfiles for things related to this, and recently released a Racket implementation of them

There are also things like Vault for holding/accessing/managing secrets.


yes that is it.

I am pleasantly surprised it is free for open source - but a bit scared that they don’t list prices for business.

Ansible also has a vault file format; the file is stored encrypted and decrypted upon deployment, so no service API is needed

Am I reading the Vault page incorrectly? Is the open source version free for all (including business) ?

Yes, it’s using the MPLv2 license :+1:

Cool.

how are you planning on deploying your project?

A server on Digital Ocean.

Nginx in front of the racket web-server(s).

(Practising for another project)

If you happen to already be using Docker for any part of this, Docker Swarm has a built-in secrets system where secrets look like regular files that your app reads. But Swarm keeps them in memory, encrypted at rest and in transit.
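
From the Racket side that could look like the sketch below; the secret name client-secret is made up (it would be created with docker secret create), and Swarm mounts secrets read-only under /run/secrets/.

    #lang racket
    ;; Swarm exposes each secret as an in-memory, read-only file
    ;; under /run/secrets/.
    (define github-client-secret
      (string-trim (file->string "/run/secrets/client-secret")))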

That sounds elegant. Do Docker containers work the same way as Droplets?

No, each container acts more like a process than like a VM. It takes some getting used to.

You’d want to put nginx and your racket webserver in separate containers, for example. Then Swarm could horizontally scale them separately.

I wouldn’t recommend going down this route unless you’re willing to set aside some time for learning docker. Not using it for this is perfectly reasonable.


served me well for my career so far. In this case it would let the secret management be decoupled from the application (i.e. you can use whatever technique you like to get the secret into an env var)

@chris613 Great set of guidelines.

yeah, it was originally a Heroku blog post I think. Like I said, it’s been my own little bible for applications in production for many years, and I’ve seen a LOT of other setups that were way more complex - and way more simple - but ultimately harder to maintain

If I need to change hosting at some point, then Docker would make it easy.

The tools at DigitalOcean are directed at Droplets (VMs), though.

I think this was the original one I read: https://blog.heroku.com/twelve-factor-apps

I am not quite sure how running Docker containers on DO works.

I just sat down to read your post linked here @lexi.lambda and see you had beaten me to the 12-factor referencing :wink:

@soegaard2 you’d just run the docker program on your droplet

That’s it?!

Yes! With one caveat: Docker and Docker Swarm are two different things. Swarm is the official system for running a cluster of containers and restarting them when anything dies. To use Swarm, you’d run docker swarm init in your droplet.

Déjà vu. I had forgotten that DO has Kubernetes. I ignored it because the cheapest option is $10 a month while the cheapest droplet is $5 a month.

I didn’t realize I could use Docker from within a droplet.

For context, Swarm and Kubernetes do roughly the same things. Kubernetes can do a lot more but it’s also much more complicated. Your average small web app almost certainly doesn’t need it.

I’ll put it on my (fast growing) list of things to read up on.

Right now I don’t have a real sense of how many users / how much traffic a single web-server can handle.

Most web apps are IO-bound, not CPU-bound, so database performance should be your bottleneck and not webserver performance. If it seems like the webserver is too slow, there’s a good chance its code is just blocking on things it shouldn’t be.

Or it has a memory leak somewhere

Apropos performance - is there a package that keeps track of the average response times of different routes?

I don’t think so :(
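
Rolling a simple version by hand is not hard, though. A minimal sketch, assuming handlers of type (-> request? response?) from web-server/http; every name below is made up, and the mutable hash is not thread-safe:

    #lang racket
    (require net/url web-server/http)

    ;; path -> (cons total-ms request-count); a simple in-memory store
    (define timings (make-hash))

    ;; Wrap a request handler so every call records its elapsed time.
    (define ((wrap-timing handler) req)
      (define start   (current-inexact-milliseconds))
      (define resp    (handler req))
      (define elapsed (- (current-inexact-milliseconds) start))
      (define path    (url->string (request-uri req)))
      (hash-update! timings path
                    (λ (p) (cons (+ (car p) elapsed) (add1 (cdr p))))
                    (cons 0.0 0))
      resp)

    ;; Average response time (ms) for a path, or #f if never seen.
    (define (average-response-time path)
      (match-define (cons total count) (hash-ref timings path (cons 0.0 0)))
      (and (positive? count) (/ total count)))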

There is a package for reporting webserver exceptions to Sentry

Maybe there is an add-on for nginx?

(thanks Bogdan!)

Probably

If you’re interested in that kind of data and monitoring, you might want to set up Prometheus

It’s a database for timeseries data and metrics. A lot of tools can export metrics about themselves to Prometheus, including docker.

There’s probably an add-on to nginx to export various useful timing statistics to Prometheus.

Thanks for the tip.

@soegaard2 The technique I used in my application is to define the keys in environment variables when I build the application and embed them in the compiled code using a macro like so:

    ;; At compile time, expand to the (quoted) value of the AL2DSAPIKEY
    ;; environment variable, baking it into the compiled code.
    (define-syntax (embedded-api-key stx)
      (syntax-case stx ()
        [_ #`(quote #,(getenv "AL2DSAPIKEY"))]))

    (define (builtin-api-key) (embedded-api-key))

The resulting executable (or byte-code files) now has the key inside, and you don’t need to worry about accidentally pushing the keys to the repository, since git cannot (yet) version control environment variables.
The disadvantage is that you’ll have to re-build the application if the key changes, but keys don’t change very often.

@alexharsanyi I like that solution!

wait, doesn’t that mean your app’s bytecode will contain the key in a way that can be extracted?

I suppose whether or not that matters to you depends on your approach to deployment and your threat model

Yes. Do you know how to extract it?
My use case is shipping a built application with a private API key that users can install on their own machines. Sure, I could encrypt the key, but then you could just extract the encryption key instead. I am interested to know how others do it, but my impression is that the only applications that are not cracked are the ones that are not worth cracking :slightly_smiling_face:, so I settled for a good-enough solution rather than a perfect one. I can also revoke the API keys if they become compromised, although since I only use their free tier, it might be simpler for someone to get a free set of API keys for themselves than to extract mine from my application.
And for a web service it does not matter, as most users won’t have access to the binary anyway - of course, this solution does not work if your application uses different keys for staging and production.

for desktop apps you’re totally right that it’s not a security issue, yeah

since there is no possible way to keep any secrets needed by the app out of the hands of its user

The OAuth2 RFC acknowledges that installed applications can’t keep secrets: https://tools.ietf.org/html/rfc6749#section-2.1. My brief and not-recent OAuth2 experience is mostly with Google, which treats installed applications slightly differently from server-side applications. It seems like GitHub wants to ignore the distinction, though.

Weird posting in Freelancer: https://www.freelancer.com/projects/python/DrRacket-Program-21482370/