If your MySQL slave is broken or out of sync, or you just want to import the master data, here are the steps to jumpstart your MySQL replication.
On the slave
mysql> STOP SLAVE;
mysql> RESET SLAVE;
On the master
root# mysqldump --routines --master-data --single-transaction --skip-add-locks --skip-lock-tables --default-character-set=utf8 database > db-backup-`date -I`.sql
Now push this dump to the slave server.
This one is just if you don't have the user or want to change the password.
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'example.org' IDENTIFIED BY 'slave';
On the slave
mysql> drop schema database;
mysql> create schema database;
root# mysql database < db-backup-date.sql
Since we did the reset, we have to set the replication user information again:
mysql> CHANGE MASTER TO MASTER_USER='replication', MASTER_PASSWORD='slave';
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G
and look for:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
Let’s review these parameters and see their effect:
--user: This is the user which initiates the dump. Depending on other parameters, the user may need quite a few privileges, such as
REPLICATION CLIENT. Since I do not usually allow remote root access to MySQL, I create a temporary user solely for the purpose of the dump (many times it's a one-time action), restricted to the specific machine from which the dump is run, and grant this user all the necessary privileges.
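For reference, such a throwaway dump user might look like this. This is only a sketch: the user name, host, and password are placeholders I made up, and the privilege list assumes a dump run with --single-transaction and --master-data as above:

```sql
-- Hypothetical one-time dump user; adjust the host and password.
CREATE USER 'dumper'@'app01.example.org' IDENTIFIED BY 'secret';
GRANT SELECT, SHOW VIEW, TRIGGER, LOCK TABLES, REPLICATION CLIENT
  ON *.* TO 'dumper'@'app01.example.org';
-- ...run mysqldump from app01.example.org...
DROP USER 'dumper'@'app01.example.org';
```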
--routines: It is a real annoyance to have to remember this flag. In contrast to --triggers, which defaults to TRUE, the
--routines parameter defaults to FALSE, which means that if you forget it, you don't get the stored functions and procedures in your schema dump.
--master-data: I always enable binary logs on the MySQL nodes I work on, even though binary logs can mean more IO operations (writing them makes for more disk writes, obviously, but also disables some InnoDB optimizations) and more disk space (I once worked with a company with such a burst of traffic that the binary logs completely filled their disk in less than one day). If binary logs are enabled, the
--master-data parameter allows for easy replication setup: the dump includes the
CHANGE MASTER TO MASTER_LOG_FILE='...', MASTER_LOG_POS=... statement, so there is no need to run
SHOW MASTER STATUS on the dumped node. Optionally, you can set
--master-data=2 to have the statement commented out.
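For example, with --master-data=2 the top of the dump contains a line along these lines (the log file name and position here are made-up placeholders):

```sql
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=1234;
```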
--single-transaction, --skip-lock-tables: When working with transactional-only storage engines (InnoDB is the most popular choice, but new engines are coming: Falcon, PBXT, Transactional-Maria, SolidDB and more), these parameters allow for a non-interruptive backup which does not place read locks on the tables. It is possible to keep reading and writing to the database while mysqldump is running with --single-transaction. Running in this mode does have its penalty: more IO operations (due to MVCC's duplication of data while many transactions access the same data for read/write), so the server is likely to perform more slowly during the dump.
--default-character-set=utf8: I’ve seen more MySQL installations than I can remember in which world-wide textual data was stored in the Latin1 charset. Many developers, who test using plain English data, are not even aware of the issues arising from later changing the data to utf8. But even those who are, are usually unaware of the necessity to configure the character set on a per-connection basis, or for their specific clients (JDBC or PHP connectors, etc.). mysqldump is no different, and if you have non-Latin text in your tables, always remember to set this option.
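If you control both the server and the clients, a my.cnf fragment along these lines saves you remembering the flag per client. This is a sketch, assuming MySQL 5.x option names; adjust the collation to taste:

```ini
[client]
default-character-set = utf8

[mysqld]
character-set-server = utf8
collation-server     = utf8_general_ci
```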
After the upgrade I had a ton of issues getting my Rails environment back online, which I could have avoided if I had just uninstalled all my ports first.
The problem was that I had both Ruby 1.8.6 and 1.8.7 installed as ports, and those were conflicting with the new 1.8.7 that the upgrade installed. I could not get Passenger to boot after the upgrade…
So what I did was upgrade Xcode, upgrade MacPorts, then force-uninstall ALL ports and reinstall the ports that I wanted (reboot 🙂), and then I installed the passenger gem using the native 1.8.7 Ruby install.
Download Tomcat binaries for Windows x64
Install Tomcat (Catalina)
- Run the normal 32-bit Tomcat installer, but install as if it were a 64-bit install. Install to "C:\Tomcat". Make sure to point Tomcat at a 64-bit JRE.
- If you chose to install "Tomcat Native" in the above step, replace the Tomcat Native DLLs (found in Tomcat's bin directory) with 64-bit versions
- Replace tomcat6.exe and tomcat6w.exe (found in Tomcat's bin directory) with 64-bit versions
- Unblock the EXEs (right-click -> Properties -> Unblock)
- Allow tomcat6w.exe to run as Administrator
Configure Tomcat (Catalina)
I recommend using the defaults 8080 and 8009 first, even if you are planning a non-default-port install. When you have 8080 (the HTTP connector, easy to test with a browser or telnet) and 8009 (the AJP connector that IIS 7 talks to) working, you can change the ports.
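Those two ports map to connectors in Tomcat's conf/server.xml; the Tomcat 6 defaults look roughly like this (attributes trimmed):

```xml
<!-- HTTP connector: browse http://localhost:8080 to verify Tomcat itself -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000" redirectPort="8443" />
<!-- AJP connector: this is the port the IIS 7 ISAPI redirector connects to -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
```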
Configure IIS 7
- Open IIS 7 Manager
- Navigate to your host.
- Double click on the ISAPI and CGI Restrictions icon
- On the right hand panel, click Add…
- Point the path to your isapi_redirect.dll file and give it a description, e.g. tomcat
- Check “Allow extension path to execute”
- Now, click on the Default Website and navigate to ISAPI Filters.
- On the right hand panel, click on Add… and point to your isapi_redirect.dll file, give the filter a name (eg. tomcat)
- Navigate to your Default Website again
- Add a virtual directory and name it catalina, point this to the directory containing isapi_redirect.dll
- Click on the newly created virtual directory catalina
- Navigate into Handler Mappings, on the right hand side you will see a panel called Actions. Click on Edit Feature Permissions and tick the execute permission.
- Finally, navigate to your host context again and do a restart.
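The ISAPI redirector itself is configured by a few files that live next to the DLL. A minimal sketch, assuming Tomcat lives in C:\Tomcat and a worker named ajp13w; all paths and names here are placeholders, not from the original setup:

```properties
# isapi_redirect.properties (next to isapi_redirect.dll)
extension_uri=/catalina/isapi_redirect.dll
log_file=C:\Tomcat\logs\isapi_redirect.log
log_level=info
worker_file=C:\Tomcat\conf\workers.properties
worker_mount_file=C:\Tomcat\conf\uriworkermap.properties

# workers.properties -- must match Tomcat's AJP port
worker.list=ajp13w
worker.ajp13w.type=ajp13
worker.ajp13w.host=localhost
worker.ajp13w.port=8009

# uriworkermap.properties -- which URLs IIS forwards to Tomcat
/examples/*=ajp13w
```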
Thanks to http://jspors.blogspot.com/
I wanted to use a state machine plugin for my Order model and found that ttilley-aasm was a good fit for me.
Here is how I installed it:
config.gem 'ttilley-aasm', :source => 'http://gems.github.com', :lib => 'aasm'
# rake gems:install
# ./script/generate migration add_aasm_state_to_Order aasm_state:string
# rake db:migrate
And now how I test that it worked:
>> require 'aasm'
>> class Order
>> include AASM
>> aasm_initial_state :new
>> aasm_state :new
>> aasm_state :shipped
>> aasm_event :ship do
?> transitions :to => :shipped, :from => [:new]
?> end
>> end
That worked. Now my Order class…
class Order < ActiveRecord::Base
  include AASM
  aasm_initial_state :in_progress
  aasm_state :in_progress, :exit => :generate_order_number
  aasm_state :shipped
  aasm_event :ship do
    transitions :to => :shipped, :from => [:in_progress], :if => :paid?
  end
  def generate_order_number; …; end
  def paid?; …; end
end
That looks great!
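As a gem-free illustration of what the :if => :paid? guard does, here is a plain-Ruby sketch; the class and method names are mine, not from aasm or the model above:

```ruby
# Plain-Ruby sketch of the guarded :ship transition; no ActiveRecord/aasm.
class SketchOrder
  attr_reader :state

  def initialize
    @state = :in_progress   # aasm would call this the initial state
    @paid  = false
  end

  def paid?
    @paid
  end

  def pay!
    @paid = true
  end

  # Mirrors: transitions :to => :shipped, :from => [:in_progress], :if => :paid?
  def ship
    return false unless @state == :in_progress && paid?
    @state = :shipped
    true
  end
end

order = SketchOrder.new
order.ship    # => false, the :paid? guard fails
order.pay!
order.ship    # => true
order.state   # => :shipped
```

The event only fires when both the :from state and the guard allow it, which is exactly why an unpaid order stays :in_progress.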
After moving from Thin to Passenger to save memory on one of my VMs (with only 256MB, every byte counts…) I could not get Paperclip to resize images.
This was the error I got:
[paperclip] An error was received while processing: #
[paperclip] Processing medium # in the thumbnail processor.
[paperclip] An error was received while processing: #
The problem was permissions on the Rails tmp directory. When I deploy using Capistrano, the RAILS_ROOT/tmp directory is cleared out, so the .ruby_inline setup is re-initialized by the root user. Since I don’t run Passenger as root, this throws the error above. Here is how I get around it without logging in to the server after each deploy and executing a chown command:
In my RAILS_ROOT/config/environments/production.rb file I added this to the bottom:
temp = Tempfile.new('ruby_inline', '/tmp')
dir = temp.path
ENV['INLINEDIR'] = dir
This replaced the old setup that I had:
ENV['INLINEDIR'] = File.join(RAILS_ROOT, 'tmp')
So now I delete the .ruby_inline directory with the wrong permissions, and a new one is created by the rails user… nice!