I have 4 Nginx workers and 4 Unicorn workers. We hit a concurrency issue in some of our models that validate uniqueness: we get duplicate records when we send multiple requests at the same time for the same resource.
Here's some context.
Model (simplified)
class License < ActiveRecord::Base
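  # the uniqueness validation runs a SELECT before the INSERT, so it is not atomic across worker processes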
  validates :serial_number, :uniqueness => true
end
Unicorn.rb
APP_PATH = '.../manager'
worker_processes 4
working_directory APP_PATH # available in 0.94.0+
listen ".../manager/tmp/sockets/manager_rails.sock", backlog: 1024
listen 8080, :tcp_nopush => true # also listen on a TCP port
timeout 600
pid "#{APP_PATH}/tmp/pids/unicorn.pid"
stderr_path "#{APP_PATH}/log/unicorn.stderr.log"
stdout_path "#{APP_PATH}/log/unicorn.stdout.log"
preload_app true
GC.copy_on_write_friendly = true if GC.respond_to?(:copy_on_write_friendly=)
check_client_connection false
run_once = true
before_fork do |server, worker|
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
  MESSAGE_QUEUE.close
end
after_fork do |server, worker|
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end
nginx.conf (simplified)
worker_processes 4;
events {
    multi_accept off;
    worker_connections 1024;
    use epoll;
    accept_mutex off;
}
upstream app_server {
    server unix:/home/blueserver/symphony/manager/tmp/sockets/manager_rails_write.sock fail_timeout=0;
}
try_files $uri @app;
location @app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
    proxy_pass http://app_server;
}
Every time I send multiple requests (around 10+) to create Licenses at once, I get some duplicates. I understand why: each Unicorn worker runs the uniqueness validation as a SELECT before the INSERT, so two workers handling concurrent requests can both see that the serial_number doesn't exist yet and both insert it...
ActiveRecord validates the uniqueness of the field at the application level rather than at the database level. One workaround would be to enforce uniqueness in the database itself with unique indexes, but doing that for every uniqueness validation we have would be very cumbersome.
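For the License model it would look something like this (a rough sketch; the migration name is made up and the :licenses table name is just the Rails convention for the model above):
class AddUniqueIndexToLicenses < ActiveRecord::Migration
  def change
    # let the database reject duplicate serial numbers atomically
    add_index :licenses, :serial_number, :unique => true
  end
end
With the index in place, the losing INSERT in a race raises ActiveRecord::RecordNotUnique instead of silently creating a duplicate row.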
Another workaround is to limit the write requests (POST/PUT/DELETE) to a single Unicorn worker and have multiple Unicorn workers reply to the read requests (GET). That means defining two upstreams (app_read_server for all the workers, app_write_server for the single write worker) and routing in the Nginx location block, something like this:
# 4 unicorn workers for GET requests
proxy_pass http://app_read_server;
# 1 unicorn worker for POST/PUT/DELETE requests
limit_except GET {
    proxy_pass http://app_write_server;
}
This fixes the concurrency issue. However, a single write server cannot keep up with the write load at peak times.
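If I went with the database-level index instead, all four workers could keep handling writes and the loser of a race would just get an error back that I'd have to handle in the controller. A rough sketch of what I mean (the LicensesController and its params are made up for illustration, and it assumes the unique index from the migration above):
class LicensesController < ApplicationController
  def create
    @license = License.new(:serial_number => params[:serial_number])
    if @license.save
      render :json => @license, :status => :created
    else
      render :json => @license.errors, :status => :unprocessable_entity
    end
  rescue ActiveRecord::RecordNotUnique
    # the unique index rejected a concurrent duplicate; report it like a validation failure
    render :json => { :serial_number => ["has already been taken"] }, :status => :unprocessable_entity
  end
end
That handles the race, but it still means touching every model and controller that has a uniqueness validation, which is the cumbersome part.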
Any idea to solve the concurrency and scalability issues with Nginx+Unicorn?