Channel: Active questions tagged redis+ruby-on-rails - Stack Overflow
Viewing all 884 articles

In Rails, is there a way to "short-circuit" out of a rescue_from?


I am trying to build a nice, user-friendly failsafe recovery for Redis in Rails. I'm using the built-in redis_cache_store with reconnect_attempts set, but I would like to send the user to a custom error page if Redis stays down, rather than leave them stuck on an unresponsive page with no idea what is going on.

What I tried was to monkey-patch the failsafe method in RedisCacheStore by re-raising a new type of error, RedisUnavailableError, which I then have my application_controller catch and redirect_to a static 500 page.

Where this breaks down is that I would like my application to just stop after the redirect. In the byebug trace below, you can see that the redirect line is reached just before the second-to-last block I've included.

But it does not stop. I've included just the first of a long sequence of further methods that eventually attempt another write to Redis, which is still down. That re-raises the same sequence of exceptions, which Rails will not catch a second time (and even if it did, that would be an infinite loop).

So my question is: is there any way to get the rescue_from block to just stop and not continue triggering anything else after a certain line is reached?

Alternatively, is there any way to disable Redis on the fly, or to swap the cache store for some null implementation inside the rescue_from block, so that nothing triggered afterwards tries to talk to Redis?

    [40, 49] in (DELETED)/redis_failsafe.rb
       40:   rescue ::Redis::BaseConnectionError => e
       41:     byebug
       42:     handle_exception(exception: e, method: method, returning: returning)
       43:     returning
       44:     byebug
    => 45:     raise RedisUnavailableError
       46:     # Re-raise the exception when reconnect attempt fails, so our application controller can handle it.
       47:   end
       48: end
       49: 
    (byebug) c

    [INFO][15:50:51] [3cf7] [GET] [(DELETED):3000] [/suppliers] 
    ∙ Redirecting to 500 page due to: RedisUnavailableError

    [30, 39] in /(DELETED)/application_controller.rb
       30:   authorize_resource class: false
       31: 
       32:   rescue_from RedisUnavailableError do |exception|
       33:     byebug
       34:     Rails.logger.info "Redirecting to 500 page due to: #{exception.message}"
    => 35:     redirect_to '/500.html'
       36:     byebug
       37:   end
       38: 
       39:   rescue_from ActiveRecord::RecordNotFound do
    (byebug) n
    ∙ Redirected to http://(DELETED)/500.html
    Return value is: nil

    [32, 41] in /(DELETED)/application_controller.rb
       32:   rescue_from RedisUnavailableError do |exception|
       33:     byebug
       34:     Rails.logger.info "Redirecting to 500 page due to: #{exception.message}"
       35:     redirect_to '/500.html'
       36:     byebug
    => 37:   end
       38: 
       39:   rescue_from ActiveRecord::RecordNotFound do
       40:     render status: :not_found, plain: 'Not found'
       41:   end
    (byebug) n

    [51, 60] in /(DELETED)/rescuable.rb
       51:   def rescue_with_handler(exception, object: self, visited_exceptions: [])
       52:     visited_exceptions << exception
       53: 
       54:     if handler = handler_for_rescue(exception, object: object)
       55:       handler.call exception
    => 56:       exception
       57:     elsif exception
       58:       if visited_exceptions.include?(exception.cause)
       59:         nil
       60:       else
    (byebug) 
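For what it's worth: `redirect_to` (and `render`) only set up the response; neither halts execution, and the trace above shows the handler returning normally, which is how rescue_from marks the exception handled. The later re-raise typically comes from something downstream (for example session or cache middleware) touching Redis while the response is finalized. Below is a hedged sketch of a handler that serves the static page directly, avoiding the follow-up request that a redirect triggers; RedisUnavailableError is the custom error from the question:

```ruby
rescue_from RedisUnavailableError do |exception|
  Rails.logger.error "Redis unavailable: #{exception.message}"
  # Serve the static page in this same response: no redirect means no
  # second request that would touch Redis again.
  render file: Rails.public_path.join("500.html"),
         status: :internal_server_error, layout: false
end
```

On the second part of the question: `ActiveSupport::Cache::NullStore` is the built-in no-op store, but swapping `Rails.cache` at runtime is not a documented operation, so treating Redis-down as fatal for the current request (as above) is the safer shape.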

Flaky tests with redis


I have a unit test for caching an API key in redis (in a rails app).

Redis is assigned to a global variable ($redis) wrapping a connection_pool, so I call $redis.with do |redis| to get an actual Redis connection.

The issue is that my test is incredibly flaky. It has two assertions: one to test getting the expected value out, and another to test that the value stays cached (for 10 minutes).

jwt = JWT.encode payload, private_key, "RS256"
assert_equal jwt, Foo.app_token
travel 5.minutes
assert_equal jwt, Foo.app_token

I can't work out why the test is so flaky; it fails maybe every 3 to 5 runs. Any help would be greatly appreciated. I've never really used Redis behind a connection pool, so I may well be doing something basic wrong.

The implementation of the method:

def app_token
  find_app_token || create_app_token
end

private

def find_app_token
  $redis.with { |redis| redis.get APP_KEY }
end

def create_app_token
  payload = # some payload
  JWT.encode(payload, private_key, "RS256").tap do |jwt|
     $redis.with do |redis|
       redis.set APP_KEY, jwt
       redis.expireat APP_KEY, payload[:exp]
     end
  end
end
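One common source of this kind of flakiness: Redis state outlives each test run, so the first assertion can pick up a token cached by a *previous* run, and if the JWT payload contains time-based claims (iat/exp), that old token no longer equals the freshly encoded one. A minimal, self-contained sketch of the phenomenon; FakeRedis is an illustrative stand-in for the pooled connection, and APP_KEY mirrors the question:

```ruby
class FakeRedis
  def initialize
    @store = {}
  end

  def get(key)
    @store[key]
  end

  def set(key, value)
    @store[key] = value
  end

  def del(key)
    @store.delete(key)
  end
end

APP_KEY = "foo:app_token".freeze
redis = FakeRedis.new

redis.set(APP_KEY, "jwt-from-previous-run")  # leftover state from an earlier run
stale = redis.get(APP_KEY)                   # find_app_token returns the OLD token

redis.del(APP_KEY)                           # clearing the key in test setup...
fresh = redis.get(APP_KEY)                   # ...forces create_app_token to run
```

In the real suite the equivalent would be `$redis.with { |r| r.del APP_KEY }` in a setup block. Note also that `travel 5.minutes` moves Rails' clock but not Redis's, so it never exercises real TTL expiry.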

Sidekiq/Redis queuing a job that doesn't exist


I built a simple test job for Sidekiq and added it to my schedule.yml file for Sidekiq Cron.

Here's my test job:

module Slack
  class TestJob < ApplicationJob
    queue_as :default

    def perform(*args)
      begin
        SLACK_NOTIFIER.post(attachments: {"pretext": "test", "text": "hello"})
      rescue Exception => error
        puts error
      end
    end
  end
end

The SLACK_NOTIFIER here is a simple API client for Slack that I initialize on startup.

And in my schedule.yml:

test_job:
  cron: "* * * * *"
  class: "Slack::TestJob"
  queue: default
  description: "Test"

So I wanted to have it run every minute, and it worked exactly as I expected.

However, I've now deleted the job file and removed the job from schedule.yml, and it still tries to run the job every minute. I've gone into my sidekiq dashboard, and I see a bunch of retries for that job. No matter how many times I kill them all, they just keep coming.

I've tried shutting down both the redis server and sidekiq several times. I've tried turning off my computer (after killing the servers, of course). It still keeps scheduling these jobs and it's interrupting my other jobs because it raises the following exception:

NameError: uninitialized constant Slack::TestJob

I've done a project-wide search for "TestJob", but get no results.

I only had the redis server open with this job for roughly 10 minutes...

Is there maybe something lingering in the redis database? I've looked into the redis-cli documentation, but I don't think any of it helps me.
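Yes, something is lingering: sidekiq-cron keeps its schedule in Redis (visible as cron_job:* style keys), so deleting the job file and the YAML entry does not remove what is already stored, and the failed retries likewise live in Redis's retry set and survive restarts. A hedged cleanup sketch to run in a Rails console, assuming sidekiq-cron's documented API:

```ruby
require "sidekiq/api"

Sidekiq::Cron::Job.destroy "test_job"  # remove the lingering schedule entry
Sidekiq::RetrySet.new.clear            # drop the retries for the deleted class
Sidekiq::Queue.new("default").clear    # optionally flush anything still enqueued
```

Restarting Redis does not help because the data is persisted; the entries have to be deleted explicitly.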

ERR Client sent AUTH, but no password is set error even if I set a password


I want to start Sidekiq (bundle exec sidekiq) on my localhost; however, I am getting an ERR Client sent AUTH, but no password is set error even though I set a password in my redis.conf file.

I have already set a password in my redis.conf file, and that password is in my secrets.yml file like so:

sidekiq_redis_url: redis://localhost:6379
sidekiq_redis_pwd: redispwd

Here is my sidekiq.rb

require 'sidekiq/web'
Sidekiq.configure_server do |config|
  config.redis = { url: "#{Rails.application.secrets[:sidekiq_redis_url]}", password: "#{Rails.application.secrets[:sidekiq_redis_pwd]}" }
end
Sidekiq.configure_client do |config|
  config.redis = { url: "#{Rails.application.secrets[:sidekiq_redis_url]}", password: "#{Rails.application.secrets[:sidekiq_redis_pwd]}" }
end
Sidekiq::Web.set :sessions, false

My Redis server version is 5.0.5 and my Sidekiq version is 5.2.7. Thanks for your help.
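Two things worth checking, as guesses: (1) requirepass only takes effect if the running redis-server was actually started with that redis.conf (running `redis-server` with no arguments ignores it), and (2) the string interpolation in sidekiq.rb turns a missing secret into an empty string, which still sends an AUTH command. One alternative that sidesteps the separate password option entirely is embedding the password in the URL itself; a small sketch of that URL form, using the values from the question:

```ruby
# Redis URLs carry the password in the userinfo part: redis://:password@host:port
def redis_url_with_password(url, password)
  url.sub(%r{\Aredis://}, "redis://:#{password}@")
end

url = redis_url_with_password("redis://localhost:6379", "redispwd")
# url == "redis://:redispwd@localhost:6379"
```

Sidekiq can then be configured with just `config.redis = { url: url }` in both the server and client blocks.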

Sidekiq, Puma and Redis config


I'm running a Rails 5.1 application on Google App Engine.

Below is my config

Procfile

worker: bundle exec sidekiq -C config/sidekiq.yaml
pubsub: bundle exec rake run_analytics_queue_processor

sidekiq.yaml

:concurrency: 2
:timeout: 30
:queues:
  - default

config/initializers/sidekiq.rb

# frozen_string_literal: true

require 'sidekiq/web'

url = CREDENTIALS[:redis_url]

Sidekiq::Web.use(Rack::Auth::Basic) do |user, password|
  [user, password] == [CREDENTIALS[:sidekiq_username], CREDENTIALS[:sidekiq_password]]
end

Sidekiq.configure_server do |config|
  config.redis = { url: url, id: nil }
end

Sidekiq.configure_client do |config|
  config.redis = { url: url, id: nil, size: 12 }
end

puma.rb

# frozen_string_literal: true

# Puma can serve each request in a thread from an internal thread pool.
# The `threads` method setting takes two numbers: a minimum and maximum.
# Any libraries that use thread pools should be configured to match
# the maximum value specified for Puma. Default is set to 5 threads for minimum
# and maximum; this matches the default thread size of Active Record.
#
workers 2
threads 6, 6

preload_app!

rackup      DefaultRackup

# Specifies the `port` that Puma will listen on to receive requests; default is 3000.
#
port        ENV.fetch('PORT') { 3000 }

# Specifies the `environment` that Puma will run in.
#
environment ENV.fetch('RAILS_ENV') { 'local' }

on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  begin
    ActiveRecord::Base.connection.disconnect!
  rescue ActiveRecord::ConnectionNotEstablished
    nil
  end
  ActiveRecord::Base.establish_connection
end

# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked webserver processes. If using threads and workers together
# the concurrency of the application would be max `threads` * `workers`.
# Workers do not work on JRuby or Windows (both of which do not support
# processes).
#
# workers ENV.fetch("WEB_CONCURRENCY") { 2 }

# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to first boot the application and load code
# before forking the application. This takes advantage of Copy On Write
# process behavior so workers use less memory.
#
# preload_app!

# Allow puma to be restarted by `rails restart` command.
plugin :tmp_restart

In my Procfile, bundle exec rake run_analytics_queue_processor calls a rake task that runs endlessly, taking messages from our Pub/Sub service and queueing them as Sidekiq/ActiveJob background jobs.

I have been getting multiple Timeout::Error: Waited 1 sec errors when calling perform_later for Sidekiq jobs. Looking up the issue, it seems my Redis connection pooling was incorrect. I believe the needed size is threads * concurrency, which should be 12 in my case, and that is what I am setting the Redis pool size to in my Sidekiq client config.

Does anything in my config look glaringly wrong?
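For what it's worth, the pooling math is usually reasoned per process rather than globally, and the two sides need different sizes. A back-of-envelope sketch; the "+ 5" housekeeping margin is a convention from Sidekiq's documentation, not something in this config:

```ruby
# Each Puma *worker process* gets its own client pool; threads in that process
# share it, so one connection per thread is enough for enqueueing.
def puma_client_pool_size(threads_per_worker)
  threads_per_worker
end

# The Sidekiq server process needs one connection per job thread plus a few
# extra for heartbeat and scheduled-job polling.
def sidekiq_server_pool_size(concurrency)
  concurrency + 5
end

puma = puma_client_pool_size(6)       # 6, not 12: Puma workers are separate processes
sidekiq = sidekiq_server_pool_size(2)
```

With size: 12 the enqueue-side pool here is not undersized, so the one-second waits may instead point at slow Redis round trips or contention; measuring Redis latency from the App Engine instance would be a sensible next step.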

Sidekiq dies in between without any reason; server RAM is approx 8GB


I am working on a Ruby on Rails application; my workers run on sidekiq-6.0.1. I generally use it to send push notifications to Android and iOS devices, etc.

I'm running sidekiq in the background using the following command:

bundle exec sidekiq -L log/sidekiq.log -C config/sidekiq.yml -e development &

Sidekiq starts in the background successfully, but after a couple of hours (or sometimes within a couple of minutes) it dies silently.

There is no memory issue; my server has 8 GB of RAM.

Please help me find the best way to deal with this case.

Maily Herald: NoMethodError in MailyHerald::Webui::Dashboard#index


I want to use Maily Herald as an advanced email processing solution for Ruby on Rails applications. I have cloned a copy of the application and set it up. The issue, though, is that maintenance of the application has been poor recently; the last update was made on April 6, 2018.

However, I have been able to set it up with Ruby on Rails 5.2.3, although it threw a lot of errors, most of which I have solved with help found online. But I am currently facing a Redis issue which I can't figure out how to solve.

I am trying to access the web interface of Maily Herald, but it throws the error

NoMethodError in MailyHerald::Webui::Dashboard#index .

I have tried to solve it but have found no solution yet. The file is in HAML, not ERB. I have attached a screenshot of the error as well. Any form of help will be highly appreciated.

How to run Sidekiq in the background - what is the best approach with a Rails app running on Nginx


I'm using Sidekiq 6.0.1 and trying to run it in the background. Here is the command I'm using:

bundle exec sidekiq -d -L log/sidekiq.log -C config/sidekiq.yml -e development

This is showing

ERROR: Daemonization mode was removed in Sidekiq 6.0, please use a proper process supervisor to start and manage your services

ERROR: Logfile redirection was removed in Sidekiq 6.0, Sidekiq will only log to STDOUT

My application is a Ruby on Rails app deployed behind the Nginx web server.

What would be the best approach to run Sidekiq in the background so my Rails application can run its workers?
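Since Sidekiq 6 removed daemonization and logfile redirection on purpose, the intended replacement is a process supervisor; on a typical Nginx/Linux host that means systemd. A minimal unit sketch, where the paths, user, and environment are placeholders for this particular deployment:

```ini
# /etc/systemd/system/sidekiq.service
[Unit]
Description=sidekiq
After=network.target

[Service]
Type=simple
WorkingDirectory=/path/to/app/current
ExecStart=/usr/bin/env bundle exec sidekiq -C config/sidekiq.yml -e production
User=deploy
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Start it with `systemctl enable --now sidekiq`. Since Sidekiq now logs only to STDOUT, `journalctl -u sidekiq` replaces the removed -L flag.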


Can I use my existing Redis for a custom rails cache?


The documentation for the ActiveSupport::Cache::RedisCacheStore states:

Take care to use a dedicated Redis cache rather than pointing this at your existing Redis server. It won't cope well with mixed usage patterns and it won't expire cache entries by default.

Is this advice still true in general, especially when talking about custom data caches, not page (fragment) caches?

Or, more specifically: if I'm building a custom cache for specific costly backend calls to a slow third-party API, and I set an explicit expires_in value on my cache (or on all my cached values), does this advice apply to me at all?
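For context, the "won't cope well" warning is largely about eviction policy: an app-data Redis typically runs with noeviction (so a full cache takes down the app's writes), while a cache wants something like allkeys-lru. A dedicated-cache setup can still share a Redis *server* by isolating the cache in its own logical database and namespace and giving everything a TTL. A hedged config sketch, where the URL, database number, and TTL are illustrative:

```ruby
# config/environments/production.rb
config.cache_store = :redis_cache_store, {
  url: ENV.fetch("REDIS_URL", "redis://localhost:6379/1"), # separate db from app data
  namespace: "cache",
  expires_in: 10.minutes  # default TTL so entries actually expire
}
```

Note that logical databases share one memory limit and one eviction policy, so this mitigates mixing but does not fully remove the concern the docs raise.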

Sidekiq crashes every few hours in docker


I am facing an issue in which Sidekiq crashes every few hours in Docker. Investigating the logs reveals a TimeoutError when Sidekiq tries to connect to Redis, and restarting the containers does not work. The only way I can get it to work again is by restarting the Docker daemon.

After lots of investigation, I believe the root cause has to do with Docker's logging not keeping up with Redis logs. The diagnosis I saw for this comes from this post. Their solution was to downgrade Docker, but it seems the issue persists up to version 19.03.3, so reverting to an old Docker version is not feasible.

Is any workaround other than downgrading possible? Has anyone faced this issue before? Any suggestions for a solution?

Can Rspec wait for an answer from a ApplicationJob?


I'm learning RSpec, and I'm calling a method on a model from a spec file, but the method enqueues a background job. My question is: how can I make the spec wait for a response from the background job? Is it possible?

Thanks for your help!!!!

My Rspec File: user_spec.rb

require 'rails_helper'
RSpec.describe User, type: :model do
  let(:user) { User.create(name: "Test User Name", last_name: "") }
  describe "existing user instance" do
    it "fetch last name" do
      user.update_last_name!
      expect(user.last_name).to eq("Test Last Name")
    end
  end
end

My Model File: user.rb

class User < ApplicationRecord

  def update_last_name!
    UpdateLastNameJob.perform_later(self.id,"Test Last Name")
  end

end

My background job, working through Redis: update_last_name_job.rb

class UpdateLastNameJob < ApplicationJob
  queue_as :default

  def perform(user_id, last_name)
    @user = User.find(user_id)
    @user.last_name = last_name
    @user.save
  end
end
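The spec doesn't need to literally wait: in tests, ActiveJob can run enqueued jobs inline. A hedged sketch using ActiveJob's test helper, assuming the default :test queue adapter is active in the test environment:

```ruby
require "rails_helper"

RSpec.describe User, type: :model do
  include ActiveJob::TestHelper

  let(:user) { User.create(name: "Test User Name", last_name: "") }

  it "fetches the last name" do
    # Runs any jobs enqueued inside the block before returning.
    perform_enqueued_jobs do
      user.update_last_name!
    end
    expect(user.reload.last_name).to eq("Test Last Name")
  end
end
```

The `reload` matters: the job loads its own User instance from the database, so the spec's in-memory `user` object is stale until reloaded.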

How to connect to a clustered Azure Redis cache in redis-rb?

$
0
0

I am attempting to use the Ruby Redis client redis-rb to connect to an Azure Redis Cache configured for clustering.

What I've tried:

I have used this related question to successfully connect to a non-clustered Azure Redis Cache. I can also use this to connect to a clustered Azure Redis Cache, which correctly reports MOVED when I attempt to get or set keys:

Redis::CommandError (MOVED 1234 address_here:port_here)

I have seen this documentation for creating the connection with cluster:

Nodes can be passed to the client as an array of connection URLs.

nodes = (7000..7005).map { |port| "redis://127.0.0.1:#{port}" }
redis = Redis.new(cluster: nodes)

You can also specify the options as a Hash. The options are the same as for a single server connection.

(7000..7005).map { |port| { host: '127.0.0.1', port: port } }

I have used these examples to build an attempt against the single available DNS endpoint, which fails with the following error:

irb(main):024:0> client = Redis.new(cluster: ["redis://my-redis-cluster.redis.cache.windows.net:6379"])
...
Redis::CannotConnectError (Redis client could not connect to any cluster nodes)

I've tried each variant of this listed in the documentation, with the same results.

Problem:

Azure Cache for Redis exposes the clustered nodes behind a single DNS endpoint, while redis-rb's cluster parameter seems to expect a collection of known node endpoints.

Is it possible to use this library to connect to a clustered Azure Redis Cache? And if so, what would a reproducible example of this look like? If it is not possible with redis-rb, but is possible with another Ruby Redis client, I would also be interested in that solution.

What's the most efficient way to serve averages out of a large data set? [closed]


I inherited a codebase that tracks prices for about 20 million products. There's an average of 5 data points per item, per day. Logfiles are ingested nightly and the values dumped into Redis, where they're stored in hashes that represent a day's worth of data for that item. A Rails api sits on top of that and serves averages (calculated on the fly for every request) and misc historical data for the different price types to our various other services.

This works fine, but it was built when our inventory was about 1/10th the size, and our ElastiCache bills are outrageous (the cluster is about 100gb right now and we have to run 2 replicas). Plus it just feels gross.

It feels like this is probably better done with SQL, but I'm not quite sure how to model it. The services consuming this data don't necessarily need access to every recorded data point, but they do need things like "highest/lowest value in the last n months and the time it was recorded" that rule out just pre-calculating and only storing the averages.

The schema that first comes to mind is a product table with associated records that each represent a day, with columns for the various data points - but a year of data would be about 7.3b rows, so that feels like the wrong approach.

Am I heading in the right direction with this, or is the correct approach to stick with a kv store but just massage this data into a more manageable form?
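SQL is a reasonable direction: 7.3B rows a year is large but tractable in a partitioned relational store, and the queries described ("highest/lowest in the last n months and when") map naturally onto a per-product-per-day rollup that keeps exactly the aggregates consumers ask for, rather than every raw data point. One possible shape, as a sketch (table and column names are illustrative, and the migration version tag is arbitrary):

```ruby
class CreateDailyPriceRollups < ActiveRecord::Migration[5.2]
  def change
    create_table :daily_price_rollups do |t|
      t.bigint   :product_id, null: false
      t.date     :day,        null: false
      t.integer  :samples,    null: false, default: 0
      t.decimal  :avg_price
      t.decimal  :min_price
      t.decimal  :max_price
      t.datetime :min_recorded_at  # answers "lowest value and when it was recorded"
      t.datetime :max_recorded_at
    end
    add_index :daily_price_rollups, %i[product_id day], unique: true
  end
end
```

Partitioning by month (or using a time-series extension) keeps individual partitions small; averages over ranges then become cheap weighted averages over the rollup rows instead of on-the-fly scans of raw points.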

Encrypt Sidekiq's connection to Redis


We currently have Sidekiq set up with Azure Redis Cache and would like to encrypt the connection between them. After a little googling I came across a recently merged pull request that adds native TLS encryption to Redis, but this has not yet been released. I have seen people suggest stunnel, but I was wondering if there are any alternatives to this approach?

Rails Sidekiq on Heroku


I can see the heroku log Enqueued ContactJob (Job ID: 51992323-a2fe-425f-aec7-1f960eaf9e7d) to Sidekiq(default) with arguments: #<GlobalID:0x000055c765545178 @uri=...

Here is my Procfile

web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -q default -q mailers

But it doesn't deliver mail in production (Heroku), though it works well locally.



Not able to connect a dockerized Rails app to the host machine's Redis


I'm trying to connect my Rails app, which runs in a Docker container, to the host machine's Redis server listening on port 6379.

My Dockerfile contains:

EXPOSE 3000
EXPOSE 6379

Running:

sudo docker run -it -e RAILS_ENV=development -p 3000:3000 -p 6379:6380 <containerid>

gives the following error when Redis is running on 6380:

Redis::ConnectionError: Connection lost (ECONNRESET)

And when I try to run Redis on 6379, I get the following error with:

sudo docker run -it -e RAILS_ENV=development -p 3000:3000 -p 6379:6379

docker: Error response from daemon: driver failed programming external connectivity on endpoint vigorous_turing (2b5c8e2b4f5df5f1bfcccfdfc87fd5ea78c5c2643de4e00774e7dec67acbd8c4): Error starting userland proxy: listen tcp 0.0.0.0:6379: bind: address already in use.
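One likely reading: `-p` publishes *container* ports to the host, which is backwards here. Redis runs on the host, so the container should not publish 6379 at all (hence the "address already in use" clash with the host's Redis); instead the app inside the container needs a route *to* the host. A hedged sketch; it assumes the app reads REDIS_URL, and on Linux the host.docker.internal name needs the explicit mapping shown (Docker 20.10+):

```shell
sudo docker run -it \
  -e RAILS_ENV=development \
  -e REDIS_URL=redis://host.docker.internal:6379 \
  --add-host=host.docker.internal:host-gateway \
  -p 3000:3000 <containerid>
```

If the app's Redis host is configured elsewhere, point that config at host.docker.internal rather than localhost, since localhost inside the container is the container itself.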

Am I using connection pool correctly?


I have a Rails model (a Mongoid document stored in MongoDB). On create/update/delete I have to update Redis data based on the model data. It's not cache data; we need it to be present.

Current code is something like this

class Person
  include Mongoid::Document

  after_save do
    REDIS_POOL.with { |conn| do_something(conn, self) }
  end
end

The problem is that when we update multiple records in a loop, I believe we create an unnecessary Redis connection each time, when it could all be done over a single connection.

Is there any way to avoid it?
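Checking out from a pool does not open a new TCP connection each time: `with` borrows an already-established connection and returns it afterwards. So the loop isn't creating connections, only paying a small checkout cost per record. A tiny self-contained stand-in pool makes that visible (TinyPool is illustrative, not the connection_pool gem):

```ruby
class TinyPool
  attr_reader :created

  def initialize(&factory)
    @factory = factory
    @idle = []
    @created = 0
  end

  def with
    conn = @idle.pop
    if conn.nil?
      @created += 1          # only connect when no idle connection exists
      conn = @factory.call
    end
    yield conn
  ensure
    @idle.push(conn) if conn # return the connection for reuse
  end
end

pool = TinyPool.new { Object.new }
100.times { pool.with { |conn| conn } }
pool.created # 1: one connection served all 100 checkouts
```

If you still want a single checkout for a whole batch, wrap the loop instead of the body: `REDIS_POOL.with { |conn| people.each { |p| do_something(conn, p) } }`.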

ROR Backend - Android App Error ActionCableException: java.net.SocketException: Socket closed


RubyOnRails APIs are using ActionCable to connect to the socket for live tracking.

There are two Android apps Driver and Customer. Drivers are broadcasting their locations and customers are subscribing.

Sometimes everything looks fine on the apps and the backend, but most of the time the apps start throwing this error while connecting to Action Cable:

ActionCableException: java.net.SocketException: Socket closed

The RubyOnRails application is deployed behind Nginx and Passenger.

Please help me understand the reason and find a solution to overcome this.

Thanks.

Ruby on Rails - Action Cable Client is Connected first time then always start disconnect


Action Cable connects for the client the first time, but after a couple of seconds it starts to disconnect.

ActionCable APIs are implemented in RubyOnRails

  • When trying to subscribe to a channel, the client gets disconnected

The RubyOnRails application is deployed behind Nginx and Passenger.

Please help
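Frequent disconnects right after connecting often come from the proxy rather than Rails: Action Cable holds a long-lived WebSocket open, and Passenger by default counts it against its per-process concurrent-request limit. Passenger's documented Action Cable setup isolates the cable endpoint and lifts that limit; a sketch, where the app group name is illustrative:

```nginx
location /cable {
    passenger_app_group_name myapp_websocket;
    passenger_force_max_concurrent_requests_per_process 0;
}
```

It is also worth confirming the cable adapter is Redis (not async) in production, since async only works within a single process.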

RubyOnRails - ActionCableClient -TransmitError.notConnected For iOS Swift


RubyOnRails APIs are using ActionCable to connect to the socket for live tracking.

There are two iOS apps Driver and Customer. Drivers are broadcasting their locations and customers are subscribing.

Sometimes everything looks fine on the apps and the backend, but most of the time the apps start throwing this error while connecting to Action Cable:

ActionCableClient.TransmitError.notConnected

My RubyOnRails application is deployed behind Nginx and Passenger.

Please help me understand the reason and find a solution to overcome this.

Thanks.


