Channel: Active questions tagged redis+ruby-on-rails - Stack Overflow

How to restart Resque workers automatically when Redis server restarts


I have a Rails application that runs jobs in the background using the Resque adapter. I have noticed that every couple of days my workers disappear (they just stop), my jobs get stuck in the queue, and I have to restart the workers every time they stop.

I check using ps -e -o pid,command | grep [r]esque and launch workers in the background using (RAILS_ENV=production PIDFILE=./resque.pid BACKGROUND=yes bundle exec rake resque:workers QUEUE='*' COUNT='12') 2>&1 | tee -a log/resque.log.

Then I stopped redis-server using /etc/init.d/redis-server stop and checked the worker processes again. They had disappeared.

This suggests that the worker processes stop because the Redis server restarts for some reason.

Is there a Rails/Ruby solution to this problem? What comes to mind is writing a simple Ruby script that watches the worker processes at some interval, say 5 seconds, and restarts them if they have stopped.

UPDATE: I don't want to use tools such as Monit, God, or eye. They are not reliable; then I would need to watch them too: install God to manage the Resque workers, then Monit to watch God, and so on.

UPDATE: This is what I am using, and it is really working. I manually stopped redis-server and then started it again, and this script successfully relaunched the workers.

require 'logger'

module Watch

  def self.workers_dead?
    processes = `ps -e -o pid,command | grep [r]esque`
    processes.empty?
  end

  def self.check(time_interval)
    logger = Logger.new('watch.log', 'daily')
    logger.info("Starting watch")

    loop do
      if workers_dead?
        logger.warn("Workers are dead")
        restart_workers(logger)
      end
      sleep(time_interval)
    end
  end

  def self.restart_workers(logger)
    logger.info("Restarting workers...")
    `cd /var/www/agts-api && (RAILS_ENV=production PIDFILE=./resque.pid BACKGROUND=yes rake resque:workers QUEUE='*' COUNT='12') 2>&1 | tee -a log/resque.log`
  end

end

Process.daemon(true, true)
# __FILE__ is already a path; the original File.dirname(__FILE__) +
# "#{__FILE__}.pid" doubled the directory component
pid_file = "#{__FILE__}.pid"
File.open(pid_file, 'w') { |f| f.write Process.pid }
Watch.check 10
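If shelling out to `ps` every few seconds feels heavy, the PIDFILE written by the rake task can be probed directly. A sketch (note the caveat: with COUNT='12' the rake task forks several workers, and the pidfile may only record the parent process):

```ruby
# Probe the PID recorded in a pidfile with signal 0, which checks for
# process existence without actually sending a signal.
def worker_alive?(pid_file)
  return false unless File.exist?(pid_file)
  pid = File.read(pid_file).to_i
  return false if pid <= 0
  Process.kill(0, pid)
  true
rescue Errno::ESRCH   # no such process
  false
rescue Errno::EPERM   # process exists but is owned by another user
  true
end
```

The Watch loop above could call this instead of grepping `ps` output.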

Using Redis to track worker progress


Using Rails 6. I have a long-running Sidekiq worker that can take up to 4-6 hours depending on the load it is given. If the process stops due to an error or the daily restarts, I end up losing the progress made so far. I have been trying to implement a good caching mechanism: the job is basically a big loop, and I can resume it if I have the value of the loop counter from when execution was stopped.

Before, I was only saving the loop value in a variable and, after say 200 updates, transferring it to the Postgres database. Note: each loop iteration can take up to half a second on average. But I can still lose track if execution is stopped between the 200 iterations, and I want to keep track of every single iteration.

Should I use Redis to cache the loop iterator to prevent losing track? I can catch any error using exception handling and update the iterator value in PG. But can the SIGTERM signal from a dyno restart/update be caught somehow? Or should I rely on Redis? Maybe there is a better way?
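For what it's worth, a per-iteration checkpoint can be sketched independently of the storage backend. In the sketch below an in-memory Hash stands in for Redis so it is self-contained; a real client would use redis.get/redis.set with the same shape:

```ruby
# Checkpoint persists the loop cursor after every iteration so a killed job
# can resume where it left off. `store` is any get/set-style key-value
# store; a plain Hash stands in for Redis here.
class Checkpoint
  def initialize(store, key)
    @store = store
    @key = key
  end

  def resume_index
    (@store[@key] || 0).to_i
  end

  def record(index)
    @store[@key] = index   # real client: redis.set(@key, index)
  end
end

def run_batch(items, checkpoint)
  start = checkpoint.resume_index
  items.each_with_index do |item, i|
    next if i < start          # already processed in a previous run
    yield item
    checkpoint.record(i + 1)   # persist progress every iteration
  end
end
```

On the SIGTERM question: Sidekiq surfaces a shutdown to a still-busy job as a Sidekiq::Shutdown exception raised inside perform, so rescuing and re-raising it (or simply relying on the checkpoint plus the automatic retry) covers the dyno-restart case. Whether a Redis write per iteration is worth it is a judgment call, but against the half-second iterations described above the overhead is negligible.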

Which Redis commands does Sidekiq use?


So, I renamed several dangerous Redis commands in redis.conf. After that, the Sidekiq Web UI stopped showing any statistics, yet Sidekiq itself is functioning fine. My question is: which Redis commands does Sidekiq use? Which renames do I need to declare when configuring the server and client inside the Sidekiq initializer?

Rails redis-server throws maxmemory error. Redis::CommandError (OOM command not allowed when used memory > 'maxmemory'.):


I'm still getting the following error message

Redis::CommandError (OOM command not allowed when used memory > 'maxmemory'.):

Even after setting maxmemory to 80gb inside my redis.conf file. When I check the value of maxmemory inside redis-cli, I get 85899345920.

I'm starting my redis server with sudo redis-server config/redis.conf

Any help or guidance is appreciated, thank you.

EDIT 1:

info memory gives me:

# Memory
used_memory:1061024
used_memory_human:1.01M
used_memory_rss:2265088
used_memory_rss_human:2.16M
used_memory_peak:1064784
used_memory_peak_human:1.02M
used_memory_peak_perc:99.65%
used_memory_overhead:1047768
used_memory_startup:981280
used_memory_dataset:13256
used_memory_dataset_perc:16.62%
total_system_memory:17179869184
total_system_memory_human:16.00G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:85899345920
maxmemory_human:80.00G
maxmemory_policy:allkeys-lru
mem_fragmentation_ratio:2.13
mem_allocator:libc
active_defrag_running:0
lazyfree_pending_objects:0

redis.conf

# Generated by CONFIG REWRITE
dir "/Users/hugohouyez/code/hugoh1995/dogtime/dogtime"
maxmemory 80gb
maxmemory-policy allkeys-lru
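Side note: used_memory in the dump above is only about 1 MB against an 80 GB limit, which hints that the server answering redis-cli may not be the one the app writes to (e.g. two instances running, one started without this conf). A small hypothetical helper for turning an INFO dump into a hash so limits from different connections can be compared; with the redis gem, Redis.new(url: ...).info("memory") returns the same fields already parsed:

```ruby
# Parse `INFO memory`-style output ("key:value" lines, "#" comment lines)
# into a hash for side-by-side comparison across connections.
def parse_info(text)
  text.each_line.with_object({}) do |line, acc|
    line = line.strip
    next if line.empty? || line.start_with?("#") || !line.include?(":")
    key, value = line.split(":", 2)
    acc[key] = value
  end
end

sample = <<~INFO
  # Memory
  used_memory:1061024
  maxmemory:85899345920
  maxmemory_policy:allkeys-lru
INFO

info = parse_info(sample)
```

If the app's connection reports a different maxmemory than the redis-cli session, the OOM error is coming from a different instance than the one configured above.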

Action Cable only sending welcome and ping message to client


I am able to get my Action Cable working and the subscription to work without any dramas...

For now, I have a simple HTML page with some JS on it to connect and log the WebSocket messages.

This is the script I'm using (for now to test) on my HTML page

    var socket = new WebSocket('ws://localhost:3000/cable');
    socket.onopen = function(e) {
        console.log("Connection established")
    }
    socket.onclose = function(e) {
        console.log("Connection closed.");
    }
    socket.onmessage = function(e){
        var server_message = e;
        console.log(server_message);
    }

    socket.send(JSON.stringify(
        {
            "command": "message",
            "identifier": JSON.stringify({"channel": "tv_channel_1e80f418-08b0-478a-a77c-67a052934433" }),
            "data": JSON.stringify({
                "action": "start_broadcasting"
            })
         }
    ))
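One thing worth double-checking against the snippet above: in the Action Cable wire protocol, the identifier's channel value is matched against the channel class name (TvChannel here), not the stream name, and both identifier and data travel as JSON-encoded strings inside the outer frame. A small Ruby sketch of the frames a bare WebSocket client would send:

```ruby
require "json"

# Frames for a raw (non actioncable-js) client: a "subscribe" first, then
# "message" frames that invoke channel actions. Note the double encoding:
# identifier and data are themselves JSON strings inside the outer frame.
IDENTIFIER = JSON.generate(channel: "TvChannel")

SUBSCRIBE_FRAME = JSON.generate(command: "subscribe", identifier: IDENTIFIER)

PERFORM_FRAME = JSON.generate(
  command: "message",
  identifier: IDENTIFIER,
  data: JSON.generate(action: "start_broadcasting")
)
```

If the client subscribed with the stream name in the identifier instead of the channel class name, the server would reject the subscription and broadcasts would never be delivered to it.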

My Rails output has confirmed that the subscription is there, and my page is console-logging the ping messages and the welcome message from the WebSocket.

If I try to broadcast anything even as a test (through the console or otherwise)

ActionCable.server.broadcast "tv_channel_1e80f418-08b0-478a-a77c-67a052934433", {message: "oh hai"}

it does not get picked up by the client...

I have Redis all installed via brew and the following in my cable.yml file

development: &development
  adapter: redis
  url: redis://localhost:6379

test:
  adapter: redis
  url: redis://localhost:6379

production:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" } %>
  channel_prefix: can-i-haz-dashboard-server_production

I'm able to confirm through redis-cli that the messages are going through to the redis db.

The only answers I can find involve installing redis.

My connection.rb file (channel hard-coded for testing)

module ApplicationCable
  class Connection < ActionCable::Connection::Base
    identified_by :current_channel

    def connect
      self.current_channel = ::Channel.find("1e80f418-08b0-478a-a77c-67a052934433")
    end

    private
      def find_channel

      end
  end
end

my TvChannel.rb

class TvChannel < ApplicationCable::Channel
  def subscribed
    # stream_from "some_channel"
    stream_from "tv_channel_#{current_channel.id}"
    ActionCable.server.broadcast "tv_channel_#{current_channel.id}", 'Test message'
  end

  def unsubscribed
    # Any cleanup needed when channel is unsubscribed
  end

  def start_broadcasting
    "hello"
  end

end

I hope what I'm doing wrong is simple. Any help would be greatly appreciated.


Edit: Sorry, not sure if I was clear enough. My Rails app is API only for the websocket, there is no view I'm serving it to. The HTML page is standalone and not in the Rails environment.

Am I using connection pool correctly?


I have a Rails model (a Mongoid document) stored in MongoDB. On create/update/delete I have to update Redis data based on the model data. It's not cache data; we need it to be present.

Current code is something like this

class Person
  include Mongoid::Document

  after_save do
    REDIS_POOL.with { |conn| do_something(conn, self) }
  end
end

The problem here is that when we update multiple records in a loop, I believe we check out an unnecessary Redis connection per record, when the whole batch could have been done with a single connection.

Is there any way to avoid it?
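One pattern is to check a single connection out around the batch and let the callback reuse it when present. A sketch: CountingPool below is an in-memory stand-in with the same shape as ConnectionPool#with so the example runs anywhere, and the thread-local handoff is an illustrative convention, not part of any gem API.

```ruby
# Stand-in with the same shape as ConnectionPool#with, instrumented so we
# can observe how many checkouts a batch costs.
class CountingPool
  attr_reader :checkouts

  def initialize(conn)
    @conn = conn
    @checkouts = 0
  end

  def with
    @checkouts += 1
    yield @conn
  end
end

# Check one connection out for the whole batch and park it in a thread-local
# where the model callback can find it.
def with_shared_redis(pool)
  pool.with do |conn|
    Thread.current[:shared_redis] = conn
    yield
  ensure
    Thread.current[:shared_redis] = nil
  end
end

# What the after_save hook would do: reuse the shared connection when the
# batch wrapper is active, otherwise fall back to a normal checkout.
def redis_for_callback(pool)
  shared = Thread.current[:shared_redis]
  if shared
    yield shared
  else
    pool.with { |conn| yield conn }
  end
end
```

Wrapping the update loop in with_shared_redis(REDIS_POOL) { records.each(&:save) } then costs one checkout for the whole batch instead of one per record.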

Redis with anycable causes error "Subscribe error: Application error: ERR max number of clients reached"


I'm trying to implement a feature where a user will know whether someone is online after they log in. I'm using AnyCable for the WebSocket. When the user visits the website, the WebSocket adds the user id to Redis and deletes it when the user disconnects. Now, after some time, or if I refresh the website multiple times, it throws the error "Subscribe error: Application error: ERR max number of clients reached". The website is deployed on Heroku.

I commented out the part that adds and deletes the user id in Redis, and it didn't throw any errors anymore.

def subscribed
  stream_from "appearances_channel"

  redis = Redis.new
  puts user_id

  online = true
  puts online

  online ? redis.set("user_#{user_id}_online", "1") : redis.del("user_#{user_id}_online")

  ActionCable.server.broadcast "appearances_channel",
                               user_id: user_id,
                               online: online
end

def unsubscribed
  redis = Redis.new

  online = false
  online ? redis.set("user_#{user_id}_online", "1") : redis.del("user_#{user_id}_online")

  ActionCable.server.broadcast "appearances_channel",
                               user_id: user_id,
                               online: online
end

Error Log:

2019-08-06T13:26:37.867650+00:00 app[web.1]: [AnyCable sid=kn3TwGsIbbqA5AXX3UHWKp] RPC Disconnect: <AnyCable::DisconnectRequest: identifiers: "{\"__ltags__\":[\"ActionCable\",\"1\"],\"current_user\":\"1\"}", subscriptions: [], path: "/cable?id=1", headers: {"cookie"=>""}>

    2019-08-06T13:26:37.867978+00:00 app[web.1]: [AnyCable sid=kn3TwGsIbbqA5AXX3UHWKp] [ActionCable] [1] Finished "/cable?id=1" [AnyCable] for  at 2019-08-06 13:26:37 +0000 (Closed)

    2019-08-06T13:26:38.817847+00:00 app[web.1]: [AnyCable sid=RN4~mA8EX6PmMO2tR83Ei6] RPC Connect: <AnyCable::ConnectionRequest: path: "/cable?id=1", headers: {"cookie"=>""}>

    2019-08-06T13:26:38.818099+00:00 app[web.1]: [AnyCable sid=RN4~mA8EX6PmMO2tR83Ei6] Started "/cable?id=1" [AnyCable] for  at 2019-08-06 13:26:38 +0000

    2019-08-06T13:26:39.053969+00:00 app[web.1]: [AnyCable sid=RN4~mA8EX6PmMO2tR83Ei6] RPC Command: <AnyCable::CommandMessage: command: "subscribe", identifier: "{\"channel\":\"AppearanceChannel\"}", connection_identifiers: "{\"__ltags__\":[\"ActionCable\",\"1\"],\"current_user\":\"1\"}", data: "">

    2019-08-06T13:26:39.054228+00:00 app[web.1]: 1

    2019-08-06T13:26:39.054245+00:00 app[web.1]: true

    2019-08-06T13:26:39.131043+00:00 app[web.1]: [AnyCable 
    sid=RN4~mA8EX6PmMO2tR83Ei6] ERR max number of clients reached

    2019-08-06T13:26:39.131313+00:00 app[web.1]: E 2019-08-06T13:26:39.131Z context=node sid=RN4~mA8EX6PmMO2tR83Ei6 Subscribe error: Application error: ERR max number of clients reached
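A likely contributor, hedged: every Redis.new opens a fresh TCP connection, and the ones created per subscribe/unsubscribe above are never closed, so the server's client limit (often small on hosted Redis plans) eventually fills up. A memoized process-wide client is the usual shape. In the sketch below the factory lambda stands in for Redis.new(url: ENV["REDIS_URL"]) so the sketch runs without a server:

```ruby
# One shared client per process instead of one per callback invocation.
module PresenceRedis
  FACTORY = -> { Object.new }   # real code: -> { Redis.new(url: ENV["REDIS_URL"]) }

  def self.client
    @client ||= FACTORY.call
  end
end
```

Both callbacks would then call PresenceRedis.client.set(...) / PresenceRedis.client.del(...) against the same connection (PresenceRedis is a made-up module name for illustration).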

Using Redis on Gitlab CI for a Rails project


I am trying to set up GitLab CI for my Rails project, and for this I need Redis. I have added redis to the config file, but I am getting an error:

Redis::CannotConnectError:
  Error connecting to Redis on 127.0.0.1:6379 (Errno::ECONNREFUSED)

I am using JWTSessions for token-based authentication, and it relies on Redis. This is my setup.

.gitlab-ci.yml

image: ruby:2.5.1

services:
  - postgres:10.1
  - redis:latest

variables:
  BUNDLE_PATH: vendor/bundle
  DISABLE_SPRING: 1
  DB_HOST: postgres
  REDIS_URL: redis

before_script:
  - apt-get update -qq && apt-get install nodejs -y
  - bundle check || bundle install --jobs $(nproc)
  - cp config/database.yml.ci config/database.yml
  - bundle exec rails db:create RAILS_ENV=test
  - bundle exec rails db:schema:load RAILS_ENV=test

stages:
  - test

Tests:
  stage: test
  script:
    - bundle exec rspec spec/
  only:
    - merge_requests

config/initializers/redis.rb

require 'redis'
$redis = Redis.new(url: 'redis://redis:6379/0')

config/initializers/jwt_sessions.rb

JWTSessions.encryption_key = Rails.application.credentials.secret_jwt_encryption_key
JWTSessions.token_store = :redis, {
  redis_url: 'redis://localhost:6379',
  token_prefix: 'jwt_'
}
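For what it's worth, the three Redis references above disagree: .gitlab-ci.yml sets REDIS_URL to the bare word redis, redis.rb hard-codes redis://redis:6379/0, and jwt_sessions.rb hard-codes localhost, which is where the ECONNREFUSED on 127.0.0.1 would come from in CI. Deriving everything from one full URL in the environment is a common way to keep them aligned. A sketch; the defined? guard is only there so the snippet also loads outside the app, where the jwt_sessions gem isn't present:

```ruby
# One source of truth: a full URL in REDIS_URL
# (e.g. redis://redis:6379/0 in CI, redis://localhost:6379/0 locally).
def jwt_redis_url
  ENV.fetch("REDIS_URL", "redis://localhost:6379/0")
end

if defined?(JWTSessions)   # provided by the jwt_sessions gem in the real app
  JWTSessions.token_store = :redis, {
    redis_url: jwt_redis_url,
    token_prefix: "jwt_"
  }
end
```

The REDIS_URL variable in .gitlab-ci.yml would then carry the full redis://redis:6379/0 URL rather than just the service hostname.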

I can't seem to figure out how to get this to work. Any hint will be appreciated. Thank you!

UPDATE

I've just tried the connect block, but got this error:

connect:
  # Check that the Redis service is reachable from the job
  image: redis
  script:
    - redis-cli -h redis PING

Redis::CannotConnectError:
        Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
--- Caused by: ---
      # Errno::EADDRNOTAVAIL:
      #   Cannot assign requested address - connect(2) for [::1]:6379

Cannot retrieve active ActionCable channels


I am building a notification relay into our Rails app and am having difficulty returning the active channels. I have read other questions stating that Redis.new.pubsub("channels", "action_cable/*") will return the pub/sub channels, but I always receive an empty array. I have been using redis-cli monitor to confirm there is communication, and everything in the Rails app is working fine; I just cannot return the active pub/sub channels.

I am working in development mode and have updated my cable.yml file as below:

default: &default
  adapter: redis
  url: redis://localhost:6379/1

development:
  <<: *default

test:
  <<: *default

production:
  <<: *default

Redis is showing the connection is made and the channels are created:

(screenshot: redis-cli monitor output)

When I try to query the pub/subs I get the empty array:

(screenshot: the empty array returned)

Rails 5.2.2.1, Ruby 2.6.2, Redis 4.0.9

Save metadata to jobs on Sidekiq


Is it possible to save metadata to jobs when using Sidekiq?

For example, I want to execute a validation as a background job so that when it finishes, any errors encountered would be saved as metadata inside the job.

If this is possible, will I still be able to recover this metadata after the job is finished or dead?

Thanks in advance.
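One common shape for this, sketched: Sidekiq removes a job's payload once it succeeds, so per-job results are usually written to a store you control, keyed by the job's jid (inside a real worker, the jid accessor from Sidekiq::Worker provides it; here it is passed as an argument, and a Hash stands in for Redis, so the sketch is self-contained):

```ruby
# Store validation errors under a key derived from the job id so they can
# be fetched after the job finishes or dies. STORE stands in for Redis
# (set/get with a TTL would be the real calls).
class ValidationJob
  STORE = {}

  def perform(jid, record_id)
    errors = []
    errors << "record_id missing" if record_id.to_s.empty?
    # ... real validations would go here ...
    STORE["job:#{jid}:errors"] = errors
  end

  def self.errors_for(jid)
    STORE.fetch("job:#{jid}:errors", nil)
  end
end
```

With the results written externally like this, they survive regardless of whether Sidekiq later discards or retains the job itself.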

Passenger forking with Nginx, action cable & redis


I have some problems with my real-time notifier using Passenger (v6.0.0) and Action Cable on my staging server: when I perform some action linked to Action Cable, new Passenger processes appear (forking...), and my memory consumption increases and does not go back down.

My routes config :

mount ActionCable.server => '/user_notifs'

My Nginx config :

location /user_notifs {
        passenger_app_group_name phusion_staging_action_cable;
        passenger_force_max_concurrent_requests_per_process 0;
}

My cable.yml

staging:
  adapter: redis
  url: redis://localhost:6379/1

In my htop, I have about 30 processes like this:

Passenger AppPreloader: /xxxxxx/curent (forking...)

Can you help me? Thanks!

Can one rails application connect to multiple resque?


I am using a single Rails application, and I would like to connect to multiple Resque instances so I can publish messages to queues on each of them. I am using the resque gem, but I don't see any way to configure multiple Resque connections. Can someone help me with what configuration is needed in that case?
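Resque keeps a single global connection (Resque.redis), so one app talking to several Redis servers usually means swapping that global around each enqueue, which is not thread-safe without extra care. A sketch against a stand-in object with the same redis accessor, so the swap is observable without a real server:

```ruby
# Stand-in with Resque's `redis` accessor and an enqueue that records which
# connection setting it used.
class FakeResque
  class << self
    attr_accessor :redis

    def enqueued
      @enqueued ||= []
    end

    def enqueue(job)
      enqueued << [redis, job]
    end
  end
end

# Temporarily point the global at another server for the duration of a block.
def with_resque_server(resque, url)
  previous = resque.redis
  resque.redis = url
  yield
ensure
  resque.redis = previous
end
```

The real-world shape would be with_resque_server(Resque, "redis://reports-redis:6379/0") { Resque.enqueue(ReportJob) }, where ReportJob and the hostname are made up for illustration.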

docker-compose can't find gemfile in bundle


I have been trying to get this app working for a while. Building it doesn't give me any errors, and it works when my teammates follow the same steps. But when I run

docker-compose run app bundle exec rake db:setup

I get

Starting joatu-db           ... done
Starting joatu-v2_bundler_1 ... done
/joatu/Gemfile not found

I have tried to bundle it using:

docker-compose run app bundle install

and I get

Starting joatu-v2_bundler_1 ... done 
Starting joatu-db           ... done

[!] There was an error parsing `Gemfile`: No such file or directory @ rb_sysopen - /joatu/Gemfile. Bundler cannot continue.

I am running Windows 10 version 1809

and I am running it on WSL: Ubuntu 18.04

Docker version: docker engine 18.09

docker compose version: 1.24.0 rc-1

The docker file is:

FROM ruby:2.4.5

MAINTAINER Robert JJ Smith <rsmithlal@gmail.com>

RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs libssl-dev

RUN mkdir /joatu
WORKDIR /joatu
COPY Gemfile /joatu/Gemfile
COPY Gemfile.lock /joatu/Gemfile.lock

# Use a persistent volume for the gems installed by the bundler
ENV BUNDLE_GEMFILE=/joatu/Gemfile \
  BUNDLE_JOBS=2 \
  BUNDLE_PATH=/bundler \
  GEM_PATH=/bundler \
  GEM_HOME=/bundler

#RUN gem install bundler #tried with it and without it same error
RUN bundle install

COPY . /joatu

docker-compose.yml

version: '2.1'

volumes:
  db-data:

services:
  app:
    container_name: joatu-app
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - $PWD:/joatu
    volumes_from:
      - bundler
    ports:
      - "3000:3000"
    depends_on:
      - db
    links:
      - db
    env_file:
      - './docker/.env.app.conf'
  db:
    container_name: joatu-db
    volumes:
      - db-data:/var/lib/postgresql/data
    image: mdillon/postgis:latest
    ports:
      - 5432:5432
    env_file:
      - './docker/.env.app.conf'
  bundler:
    image: busybox
    volumes:
      - /bundler

I have changed . to $PWD and back again to try to 'unmask' it, and it still didn't work.

I have moved it to different folders in Windows to see if it was a permission issue; it still didn't work. I have also deleted the image and rebuilt it to make sure the old image wasn't interfering with the changes.

Any help would be appreciated.

Original file app is located at https://github.com/joatuapp/joatu-v2

edit:

I ran the command

 docker-compose run app ls -la

and it showed that the folder is empty. So now I know it's a copying issue, but I still don't know how to fix it.

Heroku + Sidekiq + Redis + Puma configuration issue


I have an application on Heroku that is running with Sidekiq + Redis + Puma.

Heroku configurations:

(screenshot of the Heroku dyno configuration)

Redis Total plan Connection limit: 200 (using Heroku Redis)

puma.rb

threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
workers ENV.fetch("WEB_CONCURRENCY") { 2 }

sidekiq.rb

require 'sidekiq'

Sidekiq.configure_client do |config|
  config.redis = { size: 1 }
end

Sidekiq.configure_server do |config|
  config.redis = { size: 4 }
end

sidekiq.yml

production:
  :concurrency: 2

redis.rb

Redis.current = Redis.new(url: ENV[ENV['REDIS_PROVIDER'] || 'REDIS_URL'])

This configuration is working fine, but I can't make use of the maximum connections: with the configuration above it currently uses only 34 connections.

Also, when I change this configuration, it stops working and I get these errors:

could not obtain a connection from the pool within 5.000 seconds (waited 5.002 
seconds); all pooled connections were in use

I am following this calculator

I have also used the REDIS_URL instead of size,

Sidekiq.configure_client do |config|
  config.redis = { url: ENV['REDIS_URL'] || 'redis://localhost:6379/1' }
end

Sidekiq.configure_server do |config|
  config.redis = { url: ENV['REDIS_URL'] || 'redis://localhost:6379/1' }
end

With that I also get this error: could not obtain a connection from the pool within 5.000 seconds (waited 5.007 seconds); all pooled connections were in use

Full Backtrace:

could not obtain a connection from the pool within 5.000 seconds (waited 5.002 seconds); all pooled connections were in use..["/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/connection_adapters/abstract/connection_pool.rb:202:in `block in wait_poll'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/connection_adapters/abstract/connection_pool.rb:193:in `loop'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/connection_adapters/abstract/connection_pool.rb:193:in `wait_poll'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/connection_adapters/abstract/connection_pool.rb:154:in `internal_poll'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/connection_adapters/abstract/connection_pool.rb:278:in `internal_poll'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/connection_adapters/abstract/connection_pool.rb:148:in `block in poll'", "/app/vendor/ruby-2.4.4/lib/ruby/2.4.0/monitor.rb:214:in `mon_synchronize'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/connection_adapters/abstract/connection_pool.rb:158:in `synchronize'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/connection_adapters/abstract/connection_pool.rb:148:in `poll'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/connection_adapters/abstract/connection_pool.rb:747:in `acquire_connection'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/connection_adapters/abstract/connection_pool.rb:500:in `checkout'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/connection_adapters/abstract/connection_pool.rb:374:in `connection'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/connection_adapters/abstract/connection_pool.rb:931:in `retrieve_connection'", 
"/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/connection_handling.rb:116:in `retrieve_connection'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/connection_handling.rb:88:in `connection'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/associations/join_dependency.rb:96:in `initialize'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/relation/query_methods.rb:1016:in `new'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/relation/query_methods.rb:1016:in `build_join_query'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/relation/query_methods.rb:1003:in `build_joins'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/relation/query_methods.rb:942:in `build_arel'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/relation/query_methods.rb:918:in `arel'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/relation.rb:678:in `exec_queries'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/relation.rb:546:in `load'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/relation.rb:255:in `records'", "/app/vendor/bundle/ruby/2.4.0/gems/activerecord-5.1.6/lib/active_record/relation/delegation.rb:39:in `each'", "/app/app/workers/pos/refresh_menu_worker.rb:19:in `perform'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/processor.rb:185:in `execute_job'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/processor.rb:167:in `block (2 levels) in process'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/middleware/chain.rb:128:in `block in invoke'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/middleware/chain.rb:133:in `invoke'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/processor.rb:166:in `block in process'", 
"/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/processor.rb:137:in `block (6 levels) in dispatch'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/job_retry.rb:98:in `local'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/processor.rb:136:in `block (5 levels) in dispatch'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/rails.rb:42:in `block in call'", "/app/vendor/bundle/ruby/2.4.0/gems/activesupport-5.1.6/lib/active_support/execution_wrapper.rb:85:in `wrap'", "/app/vendor/bundle/ruby/2.4.0/gems/activesupport-5.1.6/lib/active_support/reloader.rb:68:in `block in wrap'", "/app/vendor/bundle/ruby/2.4.0/gems/activesupport-5.1.6/lib/active_support/execution_wrapper.rb:85:in `wrap'", "/app/vendor/bundle/ruby/2.4.0/gems/activesupport-5.1.6/lib/active_support/reloader.rb:67:in `wrap'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/rails.rb:41:in `call'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/processor.rb:132:in `block (4 levels) in dispatch'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/processor.rb:217:in `stats'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/processor.rb:127:in `block (3 levels) in dispatch'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/job_logger.rb:8:in `call'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/processor.rb:126:in `block (2 levels) in dispatch'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/job_retry.rb:73:in `global'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/processor.rb:125:in `block in dispatch'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/logging.rb:48:in `with_context'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/logging.rb:42:in `with_job_hash_context'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/processor.rb:124:in `dispatch'", 
"/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/processor.rb:165:in `process'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/processor.rb:83:in `process_one'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/processor.rb:71:in `run'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/util.rb:16:in `watchdog'", "/app/vendor/bundle/ruby/2.4.0/gems/sidekiq-5.2.1/lib/sidekiq/util.rb:25:in `block in safe_thread'"]

Can anyone suggest a configuration that makes full use of the connection limit?
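The arithmetic behind calculators like the one linked, sketched: the web side needs up to puma_workers x threads Redis connections, and each Sidekiq process needs roughly its concurrency plus a handful of housekeeping connections (Sidekiq's own guidance is concurrency + 5; treat the +5 as a rule of thumb, not an exact figure):

```ruby
# Rough Redis connection budget for a Puma + Sidekiq app.
def redis_connections_needed(puma_workers:, puma_threads:, sidekiq_processes:, sidekiq_concurrency:)
  web_side    = puma_workers * puma_threads          # one per thread at worst
  worker_side = sidekiq_processes * (sidekiq_concurrency + 5)
  web_side + worker_side
end

def fits_plan?(limit, **usage)
  redis_connections_needed(**usage) <= limit
end
```

With the settings above (2 Puma workers x 5 threads, one Sidekiq process at concurrency 2), that is 10 + 7 = 17 connections, comfortably inside a 200-connection plan; raising the sidekiq.yml concurrency is the lever that uses more of the limit. Note also that the backtrace shown is ActiveRecord's connection pool timing out, which is sized separately (the pool setting in database.yml), not the Redis connection count.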

Sidekiq jobs stuck in enqueue


Sidekiq has been working perfectly in development mode. Now that I am trying to use it in production, all the jobs just sit in the queue and are never run. Could anyone point me in the right direction for solving this issue?


Using sidekiq/redistogo with heroku?


I followed the Sidekiq gem tutorial docs and the Heroku RedisToGo addon docs.

initializers/sidekiq.rb:

Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://redistogo:xxx.redistogo.com:10076/' }
end

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://redistogo:xxx.redistogo.com:10076/' }
end

app/workers/hard_woker.rb:

class HardWorker
  include Sidekiq::Worker

  def perform(shop_domain, webhook)
    # performing stuffs
  end
end

Webhook I am putting into background job (trying to at least):

class OrdersCreateJob < ActiveJob::Base
  def perform(shop_domain:, webhook:)
    shop = Shop.find_by(shopify_domain: shop_domain)
    shop.with_shopify_session do
      HardWorker.perform_async(shop_domain, webhook)
    end
  end
end

Procfile:

hardworker: bundle exec sidekiq -t 25

There are no errors in console.

Is something wrong here, did I miss something?

My queue:

irb(main):003:0> Sidekiq::Queue.all
=> [#<Sidekiq::Queue:0x000055b53a2d0920 @name="default", @rname="queue:default">]

I assume this means nothing is in the queue?

My goal is to move all of my CreateOrderWebhook code (which is almost 500 lines) into a background job, to put less strain on the app and prevent webhooks from being blocked.

Using Resque-Scheduler with Docker-Compose - Error connecting to Redis on localhost:6379


I'm trying to run my Rails app, Resque, and Redis together. Using Foreman etc. my setup works fine, but when I try to run it in Docker using docker-compose I get:

app_1       | 21:03:13 resque.1 | Redis::CannotConnectError: Error connecting to Redis on redis://redis:6379 (SocketError)
app_1       | 21:03:13 resque.1 | Original Exception (Redis::CannotConnectError): Error connecting to Redis on redis://redis:6379 (SocketError)

My Dockerfile looks like:

FROM ruby:2.6.3-alpine

RUN apk update && apk add bash build-base libxml2-dev libxslt-dev postgresql postgresql-dev

VOLUME ["/code"]
WORKDIR /code

My Docker Compose YML looks like:

version: '3'
services:
  app:
    build: .
    command:
      - /bin/bash
      - -c
      - |
        bundle check || bundle install
        bundle exec rake db:migrate; rake db:seed
        bundle exec foreman start
    volumes:
      - ./:/code
    ports:
      - "5000:5000"
    depends_on:
      - redis
    environment:
      REDIS_URL: redis://redis/5
  redis:
    image: redis:5.0.5-alpine

My procfile:

web:       bundle exec puma -C config/puma.rb
resque:    bundle exec rake resque:work QUEUE=*
scheduler: bundle exec rake resque:scheduler

my config/initializers/resque.rb

redis_url = ENV["REDIS_URL"] || "redis://localhost:6379"

Redis.current = Redis.new(url: redis_url)
Resque.redis = Redis.current

and my lib/tasks/resque.rake

require "resque/tasks"
require "resque/scheduler/tasks"

task "resque:preload" => :environment
namespace :resque do
  task :setup do
    require "resque"

  end

  task setup_schedule: :setup do
    require "resque-scheduler"

    Resque.schedule = YAML.load_file(Rails.root.join("config","schedule.yml"))
  end

  task scheduler: :setup_schedule
end

UPDATE

The problem appears to be specifically with resque-scheduler. If I remove it from my Procfile, I can start my app fine with docker-compose, and I can perform Resque jobs and see them running.

However, the scheduler works fine when started manually or with Foreman (not using containers), so I'd like to get it working here as well.
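Worth noting: the error names redis://redis:6379 as though the entire URL were a hostname, which is the symptom of handing a full URL string to code expecting host:port (Resque.redis = "host:port" is one such API). One possibility (a guess, not verified) is that the scheduler task chain above never loads the Rails environment, so config/initializers/resque.rb never runs for it. Either way, parsing the URL explicitly before building the client removes the ambiguity; a self-contained sketch:

```ruby
require "uri"

# Split a redis:// URL into the pieces Redis.new accepts as keyword
# arguments (host:, port:, db:), so a full URL is never mistaken for a
# bare hostname.
def redis_options(url)
  uri = URI.parse(url)
  db  = uri.path.to_s.sub(%r{\A/}, "")
  {
    host: uri.host,
    port: uri.port || 6379,
    db:   db.empty? ? 0 : db.to_i,
  }
end
```

The initializer could then do Redis.new(**redis_options(ENV.fetch("REDIS_URL", "redis://localhost:6379"))) and hand that to both Redis.current and Resque.redis.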

I have just deployed the Discourse application and get an error during heroku db:migrate. Maybe Redis is not able to connect


I deployed the Discourse application (Rails) on Heroku. The deploy succeeds, but I get an error when I run heroku run rake db:migrate db:seed_fu, and another error on heroku run rake db:create.

    config/discourse_defaults.conf 

    # message bus redis server address
    message_bus_redis_host = redis://h:p27179a7ac96b0d215c36e5d5a4bda0c0565e1e6d96cdf043b9f5481d68dd1541@ec2-3-219-59-76.compute-1.amazonaws.com:27969


    # message bus redis server port
    message_bus_redis_port = 27969

    # message bus redis slave server address
    message_bus_redis_slave_host =

    # message bus redis slave server port
    message_bus_redis_slave_port = 27969


    config/initializers/001-redis.rb

    if Rails.env.development? && ENV['DISCOURSE_FLUSH_REDIS']
      puts "Flushing redis (development mode)"
      $redis.flushall
    end
application.rb

    # frozen_string_literal: true

    # note, we require 2.5.2 and up cause 2.5.1 had some mail bugs we no longer
    # monkey patch, so this avoids people booting with this problem version
    begin
      if !RUBY_VERSION.match?(/^2\.(([67])|(5\.[2-9]))/)
        STDERR.puts "Discourse requires Ruby 2.5.2 or up"
        exit 1
      end
    rescue
      # no String#match?
      STDERR.puts "Discourse requires Ruby 2.5.2 or up"
      exit 1
    end

    require File.expand_path('../boot', __FILE__)
    require 'active_record/railtie'
    require 'action_controller/railtie'
    require 'action_view/railtie'
    require 'action_mailer/railtie'
    require 'sprockets/railtie'

    # Plugin related stuff
    require_relative '../lib/discourse_event'
    require_relative '../lib/discourse_plugin'
    require_relative '../lib/discourse_plugin_registry'

    require_relative '../lib/plugin_gem'

    # Global config
    require_relative '../app/models/global_setting'
    GlobalSetting.configure!
    unless Rails.env.test? && ENV['LOAD_PLUGINS'] != "1"
      require_relative '../lib/custom_setting_providers'
    end
    GlobalSetting.load_defaults

    if ENV['SKIP_DB_AND_REDIS'] == '1'
      GlobalSetting.skip_db = true
      GlobalSetting.skip_redis = true
    end

    require 'pry-rails' if Rails.env.development?

    if defined?(Bundler)
      bundler_groups = [:default]

      if !Rails.env.production?
        bundler_groups = bundler_groups.concat(Rails.groups(
          assets: %w(development test profile)
        ))
      end

      Bundler.require(*bundler_groups)
    end

    module Discourse
      class Application < Rails::Application

        def config.database_configuration
          if Rails.env.production?
            GlobalSetting.database_config
          else
            super
          end
        end
        # Settings in config/environments/* take precedence over those specified here.
        # Application configuration should go into files in config/initializers
        # -- all .rb files in that directory are automatically loaded.

        # this pattern is somewhat odd but the reloader gets very
        # confused here if we load the deps without `lib` it thinks
        # discourse.rb is under the discourse folder incorrectly
        require_dependency 'lib/discourse'
        require_dependency 'lib/es6_module_transpiler/rails'
        require_dependency 'lib/js_locale_helper'

        # tiny file needed by site settings
        require_dependency 'lib/highlight_js/highlight_js'

        # mocha hates us, active_support/testing/mochaing.rb line 2 is requiring the wrong
        #  require, patched in source, on upgrade remove this
        if Rails.env.test? || Rails.env.development?
          require "mocha/version"
          require "mocha/deprecation"
          if Mocha::VERSION == "0.13.3" && Rails::VERSION::STRING == "3.2.12"
            Mocha::Deprecation.mode = :disabled
          end
        end

        # Disable so this is only run manually
        # we may want to change this later on
        # issue is image_optim crashes on missing dependencies
        config.assets.image_optim = false

        # Custom directories with classes and modules you want to be autoloadable.
        config.autoload_paths += Dir["#{config.root}/app/serializers"]
        config.autoload_paths += Dir["#{config.root}/lib/validators/"]
        config.autoload_paths += Dir["#{config.root}/app"]

        if Rails.env.development? && !Sidekiq.server?
          config.autoload_paths += Dir["#{config.root}/lib"]
        end

        # Only load the plugins named here, in the order given (default is alphabetical).
        # :all can be used as a placeholder for all plugins not explicitly named.
        # config.plugins = [ :exception_notification, :ssl_requirement, :all ]

        config.assets.paths += %W(#{config.root}/config/locales #{config.root}/public/javascripts)

        if Rails.env == "development" || Rails.env == "test"
          config.assets.paths << "#{config.root}/test/javascripts"
          config.assets.paths << "#{config.root}/test/stylesheets"
          config.assets.paths << "#{config.root}/node_modules"
        end

        # Allows us to skip minifincation on some files
        config.assets.skip_minification = []

        # explicitly precompile any images in plugins ( /assets/images ) path
        config.assets.precompile += [lambda do |filename, path|
          path =~ /assets\/images/ && !%w(.js .css).include?(File.extname(filename))
        end]

        config.assets.precompile += %w{
          vendor.js
          admin.js
          preload-store.js
          browser-update.js
          break_string.js
          ember_jquery.js
          pretty-text-bundle.js
          wizard-application.js
          wizard-vendor.js
          plugin.js
          plugin-third-party.js
          markdown-it-bundle.js
          service-worker.js
          google-tag-manager.js
          google-universal-analytics.js
          preload-application-data.js
          print-page.js
          omniauth-complete.js
          activate-account.js
          auto-redirect.js
          wizard-start.js
          onpopstate-handler.js
          embed-application.js
        }

        # Precompile all available locales
        unless GlobalSetting.try(:omit_base_locales)
          Dir.glob("#{config.root}/app/assets/javascripts/locales/*.js.erb").each do |file|
            config.assets.precompile << "locales/#{file.match(/([a-z_A-Z]+\.js)\.erb$/)[1]}"
          end
        end

        # out of the box sprockets 3 grabs loose files that are hanging in assets,
        # the exclusion list does not include hbs so you double compile all this stuff
        initializer :fix_sprockets_loose_file_searcher, after: :set_default_precompile do |app|
          app.config.assets.precompile.delete(Sprockets::Railtie::LOOSE_APP_ASSETS)
          start_path = ::Rails.root.join("app/assets").to_s
          exclude = ['.es6', '.hbs', '.js', '.css', '']
          app.config.assets.precompile << lambda do |logical_path, filename|
            filename.start_with?(start_path) &&
            !exclude.include?(File.extname(logical_path))
          end
        end

        # Set Time.zone default to the specified zone and make Active Record auto-convert to this zone.
        # Run "rake -D time" for a list of tasks for finding time zone names. Default is UTC.
        config.time_zone = 'UTC'

        # auto-load locales in plugins
        # NOTE: we load both client & server locales since some might be used by PrettyText
        config.i18n.load_path += Dir["#{Rails.root}/plugins/*/config/locales/*.yml"]

        # Configure the default encoding used in templates for Ruby 1.9.
        config.encoding = 'utf-8'
        config.assets.initialize_on_precompile = false

        # Configure sensitive parameters which will be filtered from the log file.
        config.filter_parameters += [
          :password,
          :pop3_polling_password,
          :api_key,
          :s3_secret_access_key,
          :twitter_consumer_secret,
          :facebook_app_secret,
          :github_client_secret,
          :second_factor_token,
        ]

        # Enable the asset pipeline
        config.assets.enabled = true

        # Version of your assets, change this if you want to expire all your assets
        config.assets.version = '1.2.4'

        # see: http://stackoverflow.com/questions/11894180/how-does-one-correctly-add-custom-sql-dml-in-migrations/11894420#11894420
        config.active_record.schema_format = :sql

        # per https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet
        config.pbkdf2_iterations = 64000
        config.pbkdf2_algorithm = "sha256"

        # rack lock is nothing but trouble, get rid of it
        # for some reason still seeing it in Rails 4
        config.middleware.delete Rack::Lock

        # wrong place in middleware stack AND request tracker handles it
        config.middleware.delete Rack::Runtime

        # ETags are pointless, we are dynamically compressing
        # so nginx strips etags, may revisit when mainline nginx
        # supports etags (post 1.7)
        config.middleware.delete Rack::ETag

        unless Rails.env.development?
          require 'middleware/enforce_hostname'
          config.middleware.insert_after Rack::MethodOverride, Middleware::EnforceHostname
        end

        require 'content_security_policy/middleware'
        config.middleware.swap ActionDispatch::ContentSecurityPolicy::Middleware, ContentSecurityPolicy::Middleware

        require 'middleware/discourse_public_exceptions'
        config.exceptions_app = Middleware::DiscoursePublicExceptions.new(Rails.public_path)

        # Our templates shouldn't start with 'discourse/templates'
        config.handlebars.templates_root = 'discourse/templates'
        config.handlebars.raw_template_namespace = "Discourse.RAW_TEMPLATES"

        require 'discourse_redis'
        require 'logster/redis_store'
        require 'freedom_patches/redis'
        # Use redis for our cache
        config.cache_store = DiscourseRedis.new_redis_store
        $redis = DiscourseRedis.new
        Logster.store = Logster::RedisStore.new(DiscourseRedis.new)

        # we configure rack cache on demand in an initializer
        # our setup does not use rack cache and instead defers to nginx
        config.action_dispatch.rack_cache = nil

        # ember stuff only used for asset precompliation, production variant plays up
        config.ember.variant = :development
        config.ember.ember_location = "#{Rails.root}/vendor/assets/javascripts/production/ember.js"
        config.ember.handlebars_location = "#{Rails.root}/vendor/assets/javascripts/handlebars.js"

        require 'auth'

        if GlobalSetting.relative_url_root.present?
          config.relative_url_root = GlobalSetting.relative_url_root
        end

        if Rails.env == "test"
          if ENV['LOAD_PLUGINS'] == "1"
            Discourse.activate_plugins!
          end
        else
          Discourse.activate_plugins!
        end

        require_dependency 'stylesheet/manager'
        require_dependency 'svg_sprite/svg_sprite'

        config.after_initialize do
          # require common dependencies that are often required by plugins
          # in the past observers would load them as side-effects
          # correct behavior is for plugins to require stuff they need,
          # however it would be a risky and breaking change not to require here
          require_dependency 'category'
          require_dependency 'post'
          require_dependency 'topic'
          require_dependency 'user'
          require_dependency 'post_action'
          require_dependency 'post_revision'
          require_dependency 'notification'
          require_dependency 'topic_user'
          require_dependency 'topic_view'
          require_dependency 'topic_list'
          require_dependency 'group'
          require_dependency 'user_field'
          require_dependency 'post_action_type'
          # Ensure that Discourse event triggers for web hooks are loaded
          require_dependency 'web_hook'

          # So open id logs somewhere sane
          OpenID::Util.logger = Rails.logger

          # Load plugins
          Discourse.plugins.each(&:notify_after_initialize)

          # we got to clear the pool in case plugins connect
          ActiveRecord::Base.connection_handler.clear_active_connections!

          # This nasty hack is required for not precompiling QUnit assets
          # in test mode. see: https://github.com/rails/sprockets-rails/issues/299#issuecomment-167701012
          ActiveSupport.on_load(:action_view) do
            default_checker = ActionView::Base.precompiled_asset_checker

            ActionView::Base.precompiled_asset_checker = -> logical_path do
              default_checker[logical_path] ||
                %w{qunit.js qunit.css test_helper.css test_helper.js wizard/test/test_helper.js}.include?(logical_path)
            end
          end
        end

        if ENV['RBTRACE'] == "1"
          require 'rbtrace'
        end

        config.generators do |g|
          g.test_framework :rspec, fixture: false
        end

        # we have a monkey_patch we need to require early... prior to connection
        # init
        require 'freedom_patches/reaper'

      end
    end


    app/models/global_setting.rb

    # frozen_string_literal: true

    class GlobalSetting

      def self.register(key, default)
        define_singleton_method(key) do
          provider.lookup(key, default)
        end
      end

      VALID_SECRET_KEY ||= /^[0-9a-f]{128}$/
      # this is named SECRET_TOKEN as opposed to SECRET_KEY_BASE
      # for legacy reasons
      REDIS_SECRET_KEY ||= 'SECRET_TOKEN'

      REDIS_VALIDATE_SECONDS ||= 30

      # In Rails secret_key_base is used to encrypt the cookie store
      # the cookie store contains session data
      # Discourse also uses this secret key to digest user auth tokens
      # This method will
      # - use existing token if already set in ENV or discourse.conf
      # - generate a token on the fly if needed and cache in redis
      # - enforce rules about token format falling back to redis if needed
      def self.safe_secret_key_base

        if @safe_secret_key_base && @token_in_redis && (@token_last_validated + REDIS_VALIDATE_SECONDS) < Time.now
          @token_last_validated = Time.now
          token = $redis.without_namespace.get(REDIS_SECRET_KEY)
          if token.nil?
            $redis.without_namespace.set(REDIS_SECRET_KEY, @safe_secret_key_base)
          end
        end

        @safe_secret_key_base ||= begin
          token = secret_key_base
          if token.blank? || token !~ VALID_SECRET_KEY

            @token_in_redis = true
            @token_last_validated = Time.now

            token = $redis.without_namespace.get(REDIS_SECRET_KEY)
            unless token && token =~ VALID_SECRET_KEY
              token = SecureRandom.hex(64)
              $redis.without_namespace.set(REDIS_SECRET_KEY, token)
            end
          end
          if !secret_key_base.blank? && token != secret_key_base
            STDERR.puts "WARNING: DISCOURSE_SECRET_KEY_BASE is invalid, it was re-generated"
          end
          token
        end
      rescue Redis::CommandError => e
        @safe_secret_key_base = SecureRandom.hex(64) if e.message =~ /READONLY/
      end

      def self.load_defaults
        default_provider = FileProvider.from(File.expand_path('../../../config/discourse_defaults.conf', __FILE__))
        default_provider.keys.concat(@provider.keys).uniq.each do |key|
          default = default_provider.lookup(key, nil)

          instance_variable_set("@#{key}_cache", nil)

          define_singleton_method(key) do
            val = instance_variable_get("@#{key}_cache")
            unless val.nil?
              val == :missing ? nil : val
            else
              val = provider.lookup(key, default)
              if val.nil?
                val = :missing
              end
              instance_variable_set("@#{key}_cache", val)
              val == :missing ? nil : val
            end
          end
        end
      end

      def self.skip_db=(v)
        @skip_db = v
      end

      def self.skip_db?
        @skip_db
      end

      def self.skip_redis=(v)
        @skip_redis = v
      end

      def self.skip_redis?
        @skip_redis
      end

      def self.use_s3?
        (@use_s3 ||=
          begin
            s3_bucket &&
            s3_region && (
              s3_use_iam_profile || (s3_access_key_id && s3_secret_access_key)
            ) ? :true : :false
          end) == :true
      end

      def self.s3_bucket_name
        @s3_bucket_name ||= s3_bucket.downcase.split("/")[0]
      end

      # for testing
      def self.reset_s3_cache!
        @use_s3 = nil
      end

      def self.database_config
        hash = { "adapter" => "postgresql" }

        %w{
          pool
          connect_timeout
          timeout
          socket
          host
          backup_host
          port
          backup_port
          username
          password
          replica_host
          replica_port
        }.each do |s|
          if val = self.public_send("db_#{s}")
            hash[s] = val
          end
        end

        hash["adapter"] = "postgresql_fallback" if hash["replica_host"]

        hostnames = [ hostname ]
        hostnames << backup_hostname if backup_hostname.present?

        hostnames << URI.parse(cdn_url).host if cdn_url.present?

        hash["host_names"] = hostnames
        hash["database"] = db_name

        hash["prepared_statements"] = !!self.db_prepared_statements

        { "production" => hash }
      end

      # For testing purposes
      def self.reset_redis_config!
        @config = nil
        @message_bus_config = nil
      end

      def self.redis_config
        @config ||=
          begin
            c = {}
            c[:host] = redis_host if redis_host
            c[:port] = redis_port if redis_port

            if redis_slave_host && redis_slave_port
              c[:slave_host] = redis_slave_host
              c[:slave_port] = redis_slave_port
              c[:connector] = DiscourseRedis::Connector
            end

            c[:password] = redis_password if redis_password.present?
            c[:db] = redis_db if redis_db != 0
            c[:db] = 1 if Rails.env == "test"
            c[:id] = nil if redis_skip_client_commands

            c.freeze
          end
      end

      def self.message_bus_redis_config
        return redis_config unless message_bus_redis_enabled
        @message_bus_config ||=
          begin
            c = {}
            c[:host] = message_bus_redis_host if message_bus_redis_host
            c[:port] = message_bus_redis_port if message_bus_redis_port

            if message_bus_redis_slave_host && message_bus_redis_slave_port
              c[:slave_host] = message_bus_redis_slave_host
              c[:slave_port] = message_bus_redis_slave_port
              c[:connector] = DiscourseRedis::Connector
            end

            c[:password] = message_bus_redis_password if message_bus_redis_password.present?
            c[:db] = message_bus_redis_db if message_bus_redis_db != 0
            c[:db] = 1 if Rails.env == "test"
            c[:id] = nil if message_bus_redis_skip_client_commands

            c.freeze
          end
      end

      def self.add_default(name, default)
        unless self.respond_to? name
          define_singleton_method(name) do
            default
          end
        end
      end

      class BaseProvider
        def self.coerce(setting)
          return setting == "true" if setting == "true" || setting == "false"
          return $1.to_i if setting.to_s.strip =~ /^([0-9]+)$/
          setting
        end

        def resolve(current, default)
          BaseProvider.coerce(
            if current.present?
              current
            else
              default.present? ? default : nil
            end
          )
        end
      end

      class FileProvider < BaseProvider
        attr_reader :data
        def self.from(file)
          if File.exists?(file)
            parse(file)
          end
        end

        def initialize(file)
          @file = file
          @data = {}
        end

        def read
          ERB.new(File.read(@file)).result().split("\n").each do |line|
            if line =~ /^\s*([a-z_]+[a-z0-9_]*)\s*=\s*(\"([^\"]*)\"|\'([^\']*)\'|[^#]*)/
              @data[$1.strip.to_sym] = ($4 || $3 || $2).strip
            end
          end
        end

        def lookup(key, default)
          var = @data[key]
          resolve(var, var.nil? ? default : "")
        end

        def keys
          @data.keys
        end

        def self.parse(file)
          provider = self.new(file)
          provider.read
          provider
        end

        private_class_method :parse
      end

      class EnvProvider < BaseProvider
        def lookup(key, default)
          var = ENV["DISCOURSE_" + key.to_s.upcase]
          resolve(var , var.nil? ? default : nil)
        end

        def keys
          ENV.keys.select { |k| k =~ /^DISCOURSE_/ }.map { |k| k[10..-1].downcase.to_sym }
        end
      end

      class BlankProvider < BaseProvider
        def lookup(key, default)

          if key == :redis_port
            return ENV["DISCOURSE_REDIS_PORT"] if ENV["DISCOURSE_REDIS_PORT"]
          end
          default
        end

        def keys
          []
        end
      end

      class << self
        attr_accessor :provider
      end

      def self.configure!
        if Rails.env == "test"
          @provider = BlankProvider.new
        else
          @provider =
            FileProvider.from(File.expand_path('../../../config/discourse.conf', __FILE__)) ||
            EnvProvider.new
        end
      end

    end
The error:
Failed to report error: Name or service not known 2 Name or service not known subscribe failed, reconnecting in 1 second. Call stack ["/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/connection/hiredis.rb:19:in `connect'",
 "/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/connection/hiredis.rb:19:in `connect'",
 "/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/client.rb:334:in `establish_connection'",
 "/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/client.rb:99:in `block in connect'",
 "/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/client.rb:291:in `with_reconnect'",
 "/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/client.rb:98:in `connect'",
 "/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/client.rb:274:in `with_socket_timeout'",
 "/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/client.rb:131:in `call_loop'",
 "/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/subscribe.rb:43:in `subscription'",
 "/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/subscribe.rb:12:in `subscribe'",
 "/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis.rb:2824:in `_subscription'",
 "/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis.rb:2192:in `block in subscribe'",
 "/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis.rb:45:in `block in synchronize'",
 "/app/vendor/ruby-2.5.5/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize'",
 "/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis.rb:45:in `synchronize'",
 "/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis.rb:2191:in `subscribe'",
 "/app/vendor/bundle/ruby/2.5.0/gems/message_bus-2.2.2/lib/message_bus/backends/redis.rb:287:in `global_subscribe'",
 "/app/vendor/bundle/ruby/2.5.0/gems/message_bus-2.2.2/lib/message_bus.rb:721:in `global_subscribe_thread'",
 "/app/vendor/bundle/ruby/2.5.0/gems/message_bus-2.2.2/lib/message_bus.rb:669:in `block in new_subscriber_thread'"]
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
rake aborted!
Name or service not known

How, or what is the best way, to run a while loop to avoid API limitations

I am using an API that has a limit of 2 calls per second and 250 records per request. That's the gist.

I have created a background job that can optionally enqueue a second background job. This may be overkill.

Flow:

  1. Order webhooks on creation from Shopify
  2. Cron job once per day for that day's orders, in case webhooks fail.

Goal for request:

If the first API request returns >= 250 records/orders, create a second background job in a new worker to fetch page 2 about 3 minutes later. If page 2 also has >= 250 records, enqueue another job in the same worker (after the page-2 job completes) to fetch page 3 three minutes after the page-2 job started, and so on. I use n for the page number and add 1 to n whenever the 250-record condition is true.
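
The chaining described here can be sketched as a single loop. This is a rough sketch only: `fetch_page`, `schedule_pages`, and the classic `page` parameter are illustrative assumptions (newer Shopify API versions paginate by cursor instead), and the stub below replaces the real ShopifyAPI call:

```ruby
# Rough sketch of the page chaining described above: request pages of up to
# 250 records, and schedule each follow-up page a few minutes later to stay
# under the rate limit. `fetch_page` is a stub standing in for the real
# ShopifyAPI call; all names here are illustrative.
PAGE_SIZE = 250

def schedule_pages(fetch_page)
  scheduled = []   # [page, delay_in_minutes] pairs, for illustration
  page  = 1
  delay = 0
  loop do
    orders = fetch_page.call(page)
    scheduled << [page, delay]
    break if orders.size < PAGE_SIZE   # a short page means we reached the end
    page  += 1
    delay += 3                         # buffer the next request by 3 minutes
  end
  scheduled
end

# 620 fake orders split into pages of 250, 250 and 120:
fake_orders = Array.new(620) { |i| i }
fetch_page  = ->(page) { fake_orders.slice((page - 1) * PAGE_SIZE, PAGE_SIZE) || [] }
schedule_pages(fetch_page)  # => [[1, 0], [2, 3], [3, 6]]
```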

Cron Job:

    shops = Shop.all
    shops.each do |shop|
      if shop.company.present?
        ShopifyOrderUpdatesWorker.perform_later(shop)
      end
    end

Background job 1: (for first API call)

      def perform(shop)
        n = 1
        orders = ShopifyAPI::Order.find(:all, params: { created_at_min: 1.day.ago.beginning_of_day.iso8601 }, limit: 250, page: n)
        while (orders.count >= 250) || n == 1
          unless n <= 1
            t = 3
            while n > 1 && orders.count >= 250
              orders = ShopifyAPI::Order.find(:all, params: { created_at_min: 1.day.ago.beginning_of_day.iso8601 }, limit: 250, page: n)
              ShopifyOrderUpdatesLimitWorker.delay_for(t.minutes).perform_later(orders)
              n += 1 # advance to the next page for the API call
              t += 3 # add 3 minutes per loop to buffer the API call queue and stay safely under the API limits
            end
          end
          if n == 1
            orders.map do |order|
              # code here
            end
          end
          n += 1
        end
      end

Background job 2: (for any API call after the first)

  def perform(orders)
    orders.map do |order|
      #code here
    end
  end

This way, all shops can update "quickly" without being queued behind other shops. Shops that have a lot of orders will wait the same amount of time whether this is done in one action or in two.

Is this overkill? Is the code done right?

In reality, it is probably very rare a webhook will fail so the chances of the second background job being called is slim.

Any possible improvements or suggestions for the code?

This may not be the right place to ask this question, but if anyone has experience with Shopify or similar API situations, what are you doing?

Redis pool shared between Sidekiq and Rails application

Currently I use this configuration:

# concurrency = 25

$redis = ConnectionPool.new(size: 30) { Redis.new(url: ENV['REDIS_URL']) }
Sidekiq.configure_server do |config|
  config.redis = $redis
end
Sidekiq.configure_client do |config|
  config.redis = $redis
end

Note that the same $redis pool is also used in my own application code, for example:

$redis.with do |conn|
  # ...
end

Now I wonder if that is correct, or if that may create a bottleneck.

Is it better to use a separate connection pool for Sidekiq and a different one for the application?

E.g.

# use this pool only in my own code
$redis = ConnectionPool.new(size: 30) { Redis.new(url: ENV['REDIS_URL']) }

# and I define a separate connection pool only for Sidekiq
Sidekiq.configure_server do |config|
  config.redis = ConnectionPool.new(size: 30) { Redis.new(url: ENV['REDIS_URL']) }
end
Sidekiq.configure_client do |config|
  config.redis = ConnectionPool.new(size: 30) { Redis.new(url: ENV['REDIS_URL']) }
end
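
For comparison, Sidekiq's documentation generally shows passing a plain options hash and letting Sidekiq size its own internal pool to match its concurrency, keeping any hand-built `ConnectionPool` for application code only. A minimal sketch of that layout (an initializer fragment; the pool size is illustrative):

```ruby
# Sketch: let Sidekiq build and size its own internal pool from an options
# hash, and keep a separate, independently sized pool for application code.
Sidekiq.configure_server do |config|
  config.redis = { url: ENV['REDIS_URL'] }
end
Sidekiq.configure_client do |config|
  config.redis = { url: ENV['REDIS_URL'] }
end

# Application-only pool, sized for web/app threads rather than Sidekiq workers
APP_REDIS = ConnectionPool.new(size: 10) { Redis.new(url: ENV['REDIS_URL']) }
```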