Iterative Development Workflow


One of the reasons developers avoid putting any logic in the database is that it's a pain to move that logic from a file (stored in git) to the environment where it is executed (the database). In many ways it's similar to the workflow you have when developing in a compiled language like C/Java/Go, where there is an additional step, compiling, before you can see the code in action. With databases it's a bit worse, because moving that code is not as easy as running a one-line command to compile a C program.

This is the main reason we developed subzero-cli. The goal is to make the workflow for writing code that lives in the database (tables/views/stored procedures/...) as close as possible to the workflow of a Ruby/PHP/Node developer: you save a (SQL) file and that logic is immediately running in the database, ready for you to execute. While moving SQL code from files to the database is a core feature, subzero-cli does a lot more: it does the same thing for Nginx configs and Lua code (running in Nginx), it gives you a nice interface to look at the logs of each component of the stack and see the result of an HTTP call to the API, and it creates and manages your database migrations.

Check out how the whole process looks (although the gif does not show the step of saving the file).

starter kit


docker pull subzerocloud/subzero-cli-tools
npm install -g subzero-cli
subzero --help # check it was installed


In the root of your project (a clean checkout of the starter kit), execute this:

docker-compose up -d db postgrest openresty
subzero dashboard

Notice how we chose to bring up only a subset of the stack components, since we won't be using RabbitMQ for now. In another terminal window, run the following command:

curl "http://localhost:8080/rest/todos?select=id,todo"

# result
[{"id":1,"todo":"item_1"},{"id":3,"todo":"item_3"},{"id":6,"todo":"item_6"}]
Notice the result of this in each dashboard window/tab (OpenResty/PostgREST/PostgreSQL): each shows a log line for the event it registered. The most interesting window for now is probably PostgreSQL, where you can see the actual queries that were executed for that REST call. In this case they look something like this (with a bit of formatting applied compared to the real thing):

LOG:  execute 0: BEGIN ISOLATION LEVEL READ COMMITTED READ ONLY                                                                                                                 
LOG:  statement: set local role 'anonymous';set local "request.jwt.claim.role" = 'anonymous';set local "" = 'postgrest';set local "request.header.user-agent" = 'curl/7.43.0';set local "request.header.accept" = '*/*';                                                                                                                    
LOG:  execute <unnamed>:                                                                                                                                                        
          WITH pg_source AS (
              SELECT "api"."todos"."id", "api"."todos"."todo" FROM "api"."todos"
          )
          SELECT
            null AS total_result_set,
            pg_catalog.count(_postgrest_t) AS page_total,
            array[]::text[] AS header,
            coalesce(array_to_json(array_agg(row_to_json(_postgrest_t))), '[]')::character varying AS body
          FROM ( SELECT * FROM pg_source) _postgrest_t
LOG:  execute 1: COMMIT

Now we'll change the todos view. Change this line:

select id, todo, private, (owner_id = request.user_id()) as mine from data.todo;

to this:

select ('#' || id::text) as id, ('do this: ' || todo) as todo, private, (owner_id = request.user_id()) as mine from data.todo;
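
For context, this select sits inside the definition of the `api.todos` view. The schema names `api` and `data` come from the snippets above; the rest of the statement's shape is an assumption, not a copy of the starter kit source:

```sql
-- Sketch of the surrounding view definition (shape assumed, not copied
-- from the starter kit). Saving the file re-runs this statement, which is
-- why "create or replace" makes the edit repeatable.
create or replace view api.todos as
select
    ('#' || id::text) as id,          -- the formatted id from the edit above
    ('do this: ' || todo) as todo,    -- the prefixed title from the edit above
    private,
    (owner_id = request.user_id()) as mine
from data.todo;
```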

Save the file, then run the previous curl command:

curl "http://localhost:8080/rest/todos?select=id,todo"

[{"id":"#1","todo":"do this: item_1"},{"id":"#3","todo":"do this: item_3"},{"id":"#6","todo":"do this: item_6"}]
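
By the way, the `mine` column in the view relies on `request.user_id()`. Its real definition lives in the starter kit's SQL sources; a minimal sketch, assuming it reads one of the per-request `request.jwt.claim.*` settings that PostgREST sets (the same kind visible in the PostgreSQL log above), would be:

```sql
-- Minimal sketch of a request.user_id() helper (assumed shape; the starter
-- kit's actual definition may differ). PostgREST exposes JWT claims as
-- per-transaction settings, so the helper just reads one back, returning
-- null when the setting is missing or empty (e.g. anonymous requests).
create or replace function request.user_id() returns integer as $$
    select nullif(current_setting('request.jwt.claim.user_id', true), '')::integer;
$$ language sql stable;
```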

And that's the gist of it: you save your code, be it SQL, Lua, or Nginx config, and it's immediately running live in your dev stack.