Ceph Object Gateway is an object storage interface built on top of librados to provide applications with a RESTful gateway to Ceph Storage Clusters. Ceph Object Storage supports two interfaces:

S3-compatible: Provides object storage functionality with an interface that is compatible with a large subset of the Amazon S3 RESTful API.

Swift-compatible: Provides object storage functionality with an interface that is compatible with a large subset of the OpenStack Swift API.

My ceph cluster setup

mon and gateway - mon-server (192.168.1.10)
osd1 and osd2   - osd1 (192.168.1.11)
osd3            - osd2 (192.168.1.12)
OS              - Ubuntu Trusty (14.04.2 LTS)

To run the Ceph object gateway service on Ubuntu 14.04 (Trusty), you should have a running Ceph cluster and the gateway host should have access to storage and public networks.

In my case, I've done the following on mon-server (192.168.1.10).

INSTALL APACHE/FASTCGI

On Ubuntu 14.04, the multiverse repository needs to be enabled in the package resource list. Uncomment the following lines in /etc/apt/sources.list:

# deb https://archive.ubuntu.com/ubuntu trusty multiverse
# deb-src https://archive.ubuntu.com/ubuntu trusty multiverse
# deb https://archive.ubuntu.com/ubuntu trusty-updates multiverse
# deb-src https://archive.ubuntu.com/ubuntu trusty-updates multiverse

Update the package resource list:

$ sudo apt-get update

Install Apache and FastCGI:

$ sudo apt-get install apache2 libapache2-mod-fastcgi

Configure APACHE

Add a ServerName line to /etc/apache2/apache2.conf, providing the fully qualified domain name of the server machine (output of hostname -f):

ServerName mon-server

Enable the URL rewrite and FastCGI modules for Apache. Execute the following:

$ sudo a2enmod rewrite
$ sudo a2enmod fastcgi

Restart the Apache service:

$ sudo service apache2 restart

INSTALL CEPH OBJECT GATEWAY DAEMON

Ceph Object Storage services use the Ceph Object Gateway daemon (radosgw) to enable the gateway.

To install the Ceph Object Gateway daemon on the gateway host, execute the following:

$ sudo apt-get install radosgw

Once you have installed the Ceph Object Gateway packages, the next step is to configure your Ceph Object Gateway. There are two approaches: simple and federated. I used the simple approach on my system.

Simple: A simple Ceph Object Gateway configuration implies that you are running a Ceph Object Storage service in a single data center. So you can configure the Ceph Object Gateway without regard to regions and zones.

The Ceph Object Gateway is a client of the Ceph Storage Cluster. As a Ceph Storage Cluster client, it requires:

1. A name for the gateway instance. We use 'admin' in this guide.
2. A storage cluster user name with appropriate permissions in a keyring.
3. Pools to store its data.
4. A data directory for the gateway instance.
5. An instance entry in the Ceph Configuration file.
6. A configuration file for the web server to interact with FastCGI.

The configuration steps are as follows:

Execute the following steps on the admin node of your cluster:

Create a keyring for the gateway:

$ sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
$ sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring

Generate a Ceph Object Gateway user name and key for each instance. For the purposes of this guide, we will use the name 'admin' after client.radosgw:

$ sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.admin --gen-key

Add capabilities to the key:

$ sudo ceph-authtool -n client.radosgw.admin --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring

Once you have created a keyring and key to enable the Ceph Object Gateway with access to the Ceph Storage Cluster, add the key to your Ceph Storage Cluster. For example:

$ sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.admin -i /etc/ceph/ceph.client.radosgw.keyring

Distribute the keyring to the gateway host:

$ sudo scp /etc/ceph/ceph.client.radosgw.keyring USERNAME@GATEWAY_IP:/home/USERNAME
$ ssh USERNAME@GATEWAY_IP 'sudo mv ceph.client.radosgw.keyring /etc/ceph/ceph.client.radosgw.keyring'

NOTE The last step is optional if the admin node is the gateway host.

CREATE POOLS

If the pools already exist, no problem. If not, create each of the pools listed below. For example:

$ ceph osd pool create .rgw.buckets 16 16

.rgw
.rgw.root
.rgw.control
.rgw.gc
.rgw.buckets
.rgw.buckets.index
.log
.intent-log
.usage
.users
.users.email
.users.swift
.users.uid

NOTE If write permission is given, the Ceph Object Gateway will create the pools automatically.

NOTE When adding a large number of pools, it may take some time for your cluster to return to an active + clean state.

When you have completed this step, execute the following to ensure that you have created all of the foregoing pools:

$ rados lspools
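To cross-check the `rados lspools` output against the list above, here is a small Python sketch (the sample lspools output is a stand-in for what your cluster actually returns):

```python
# Cross-check `rados lspools` output against the pools the gateway expects.
REQUIRED_POOLS = [
    ".rgw", ".rgw.root", ".rgw.control", ".rgw.gc",
    ".rgw.buckets", ".rgw.buckets.index",
    ".log", ".intent-log", ".usage",
    ".users", ".users.email", ".users.swift", ".users.uid",
]

def missing_pools(lspools_output):
    """Return the required pools absent from the `rados lspools` output."""
    existing = set(lspools_output.split())
    return [p for p in REQUIRED_POOLS if p not in existing]

# Example: a cluster that so far has only the data and index pools.
sample = ".rgw.buckets\n.rgw.buckets.index\nrbd\n"
print(missing_pools(sample))
```

Any pool name printed by the sketch still needs a `ceph osd pool create` call.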

ADD A GATEWAY CONFIGURATION TO CEPH

Add the Ceph Object Gateway configuration to your Ceph Configuration file in admin node. The Ceph Object Gateway configuration requires you to identify the Ceph Object Gateway instance. Then, you must specify the host name where you installed the Ceph Object Gateway daemon, a keyring (for use with cephx), the socket path for FastCGI and a log file.

Append the following configuration to /etc/ceph/ceph.conf in your admin node:

[client.radosgw.admin]
host = {hostname}
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.admin.fastcgi.sock
log file = /var/log/radosgw/client.radosgw.admin.log

NOTE Here, {hostname} is the short hostname (output of the command hostname -s) of the node that is going to provide the gateway service, i.e., the gateway host.

NOTE The [client.radosgw.admin] portion of the gateway instance identifies this portion of the Ceph configuration file as configuring a Ceph Storage Cluster client where the client type is a Ceph Object Gateway (i.e., radosgw).

DISTRIBUTE UPDATED CEPH CONFIGURATION FILE

$ ceph-deploy --overwrite-conf config pull {gateway_hostname}
$ ceph-deploy --overwrite-conf config push osd1 osd2

COPY CEPH.CLIENT.ADMIN.KEYRING FROM ADMIN NODE TO GATEWAY HOST

$ sudo scp /etc/ceph/ceph.client.admin.keyring  USERNAME@GATEWAY_IP:/home/USERNAME
$ ssh USERNAME@GATEWAY_IP 'sudo mv ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring'

NOTE The above step need not be executed if the admin node is the gateway host.

CREATE A CGI WRAPPER SCRIPT

The wrapper script provides the interface between the webserver and the radosgw process. This script needs to be in a web accessible location and should be executable.

Execute the following steps on the gateway host:

Create the script:

$ sudo vi /var/www/html/s3gw.fcgi

Add the following content to the script:

#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.admin

Provide execute permission to the script:

$ sudo chmod +x /var/www/html/s3gw.fcgi

Create the data directory:

$ sudo mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.admin

Start the rados gateway service:

$ sudo /etc/init.d/radosgw start

CREATE A GATEWAY CONFIGURATION FILE

$ sudo vi /etc/apache2/sites-available/rgw.conf

Add the following contents to the file:

FastCgiExternalServer /var/www/html/s3gw.fcgi -socket /var/run/ceph/ceph.radosgw.admin.fastcgi.sock

<VirtualHost *:80>
ServerName localhost
DocumentRoot /var/www/html

ErrorLog /var/log/apache2/rgw_error.log
CustomLog /var/log/apache2/rgw_access.log combined

# LogLevel debug

RewriteEngine On

RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*) /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]

<IfModule mod_fastcgi.c>
	<Directory /var/www/html>
	Options +ExecCGI
	AllowOverride All
	SetHandler fastcgi-script
	Order allow,deny
	Allow from all
	AuthBasicAuthoritative Off
	</Directory>
</IfModule>

AllowEncodedSlashes On
ServerSignature Off

</VirtualHost>
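To see what the RewriteRule in rgw.conf above actually does to an incoming request, here is a small Python sketch that applies the same pattern (the example path and query string are made up for illustration):

```python
import re

# The same pattern used by the RewriteRule in rgw.conf above.
pattern = re.compile(r'^/([a-zA-Z0-9-_.]*)([/]?.*)')

def rewrite(path, query=''):
    """Mimic the RewriteRule: split a request path into the page/params
    query arguments that get passed to s3gw.fcgi."""
    m = pattern.match(path)
    page, params = m.group(1), m.group(2)
    return '/s3gw.fcgi?page=%s&params=%s&%s' % (page, params, query)

print(rewrite('/my-new-bucket/hello.txt', 'acl'))
# -> /s3gw.fcgi?page=my-new-bucket&params=/hello.txt&acl
```

The first capture group grabs the bucket-ish leading path segment, and the rest of the path plus the original query string rides along to the FastCGI wrapper.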

Disable the default site:

$ sudo a2dissite 000-default

Enable the configuration file:

$ sudo a2ensite rgw.conf

Restart the apache2 service:

$ sudo service apache2 restart

USING THE GATEWAY

CREATE A RADOSGW USER FOR S3 ACCESS

$ sudo radosgw-admin user create --uid="testuser" --display-name="First User"

The output of the command will be something like the following:

{"user_id": "testuser",
 "display_name": "First User",
 "email": "",
 "suspended": 0,
 "max_buckets": 1000,
 "auid": 0,
 "subusers": [],
 "keys": [
   { "user": "testuser",
     "access_key": "I0PJDPCIYZ665MW88W9R",
     "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"}],
 "swift_keys": [],
 "caps": [],
 "op_mask": "read, write, delete",
 "default_placement": "",
 "placement_tags": [],
 "bucket_quota": { "enabled": false,
   "max_size_kb": -1,
   "max_objects": -1},
 "user_quota": { "enabled": false,
   "max_size_kb": -1,
   "max_objects": -1},
 "temp_url_keys": []}

NOTE The values of keys->access_key and keys->secret_key are needed for access validation.
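Since the output is JSON, the two keys can be pulled out programmatically rather than by eye. A small sketch using an abbreviated copy of the output above (the keys are this guide's sample values, not real credentials):

```python
import json

# Abbreviated radosgw-admin output from above; the keys are this guide's
# sample values, not real credentials.
output = '''{"user_id": "testuser",
"keys": [
  {"user": "testuser",
   "access_key": "I0PJDPCIYZ665MW88W9R",
   "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"}]}'''

user = json.loads(output)
access_key = user['keys'][0]['access_key']
secret_key = user['keys'][0]['secret_key']
print(access_key)
```

The same approach works on the live command via `radosgw-admin user info --uid=testuser`.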

ACCESS VERIFICATION

Install the python-boto package:

$ sudo apt-get install python-boto

Create the Python script:

$ nano s3.py

import boto
import boto.s3.connection

access_key = 'YOUR_ACCESS_KEY'
secret_key = 'YOUR_SECRET_KEY'

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='{FQDN}',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )

Run the script:

$ python s3.py

The output will be something like the following:

my-new-bucket 2015-02-16T17:09:10.000Z

Test in Ruby

To test the Ceph gateway from Ruby, we use the s3 rubygem. The source code is at https://github.com/thomasalrin/s3

Edit https://github.com/thomasalrin/s3/blob/master/lib/s3.rb to point to your gateway host

Revert installation

There are useful commands to purge the Ceph gateway installation and configuration from every node, so that you can start over again from a clean state.

This will remove the Ceph configuration and keys:

ceph-deploy purgedata mon-server

This will also remove the Ceph packages:

ceph-deploy purge mon-server

If you receive the following error when you attempt to install radosgw again:

client.radosgw.admin exists but key does not match

execute this to fix the error:

ceph auth del client.radosgw.admin

Introduction

Test Kitchen is a test harness tool to execute your configured code on one or more platforms in isolation.

We’re going to use Test Kitchen to help us write a simple Chef cookbook, complete with tests that verify that the cookbook does what it’s supposed to do.

The opennebula driver is initialized through a static, declarative configuration in the .kitchen.yml file.

Installing Test Kitchen

Installing Test Kitchen from RubyGems goes like this:

$ gem install test-kitchen

Now let’s verify that Test Kitchen is installed. To get the currently installed version we type:

$ kitchen version

Test Kitchen version 1.0.0

Creating a Cookbook

We already have some cookbooks in our GitHub repository:

$ git clone https://github.com/megamsys/chef-repo.git

We can create a cookbook by using knife. knife is a tool used to configure most interactions with the Chef system. We can use it to perform work on our workstation and also to connect with the Chef server or individual nodes.

The general syntax for creating a cookbook is:

$ knife cookbook create cookbook_name

Kitchen::Opennebula

A Test Kitchen Driver for Opennebula.

Installation

Download and install latest ChefDK.

$ wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.8.0-1_amd64.deb

Install chefdk

$ sudo dpkg -i chefdk_0.8.0-1_amd64.deb

Please add bin locations to your PATH:

/opt/chefdk/bin/:/opt/chefdk/embedded/bin (unix)

NOTE : Reopen console or reload your env PATH

Install kitchen with opennebula driver:

$ gem install kitchen-opennebula --no-user-install --no-ri --no-rdoc

Configuration

opennebula_endpoint

URL where the OpenNebula daemon is listening. The default value is taken from the ONE_XMLRPC environment variable, or http://127.0.0.1:2633/RPC2 if unset.

oneauth_file

Path to the file containing OpenNebula authentication information. It should contain a single line stating “username:password”. The default value is taken from the ONE_AUTH environment variable, or $HOME/.one/one_auth if unset.
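Both options above resolve their defaults the same way: environment variable first, then a built-in fallback. A minimal Python sketch of that resolution logic (this mirrors the documented behaviour; it is not kitchen-opennebula's actual code):

```python
import os

# Mirror the documented default resolution: environment variable first,
# then a built-in fallback. (A sketch of the documented behaviour, not
# kitchen-opennebula's actual code.)
def resolve_endpoint(env):
    return env.get('ONE_XMLRPC', 'http://127.0.0.1:2633/RPC2')

def resolve_auth_file(env):
    return env.get('ONE_AUTH',
                   os.path.join(env.get('HOME', '/root'), '.one', 'one_auth'))

print(resolve_endpoint({}))
print(resolve_auth_file({'HOME': '/home/kim'}))
```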

template_name

Name of the VM definition file (OpenNebula template) registered with OpenNebula. Only one of template_name or template_id must be specified in the .kitchen.yml file. The default value is unset, or nil.

vm_hostname

Hostname to set for the newly created VM. The default value is driver.instance.name. For example: vm_hostname: xxxx

username

This is the username used for SSH authentication to the new VM. The default value is local.

memory

The amount of memory to provision for the new VM. This parameter will override the memory settings provided in the VM template. The default value is 512MB.

wait_for

This variable is used to override timeout for Fog’s common wait_for method which states that it “takes a block and waits for either the block to return true for the object or for a timeout (defaults to 10 minutes)”.

For more details, visit test-kitchen/kitchen-opennebula.

Now we’ll add Test Kitchen to our project by using the init subcommand:

$ kitchen init --driver=kitchen-opennebula
  create  .kitchen.yml
  create  chefignore
  create  test/integration/default
Successfully installed kitchen-opennebula-0.1.2
Parsing documentation for kitchen-opennebula-0.1.2
Done installing documentation for kitchen-opennebula after 0 seconds
1 gem installed

The kitchen init subcommand will create an initial configuration file for Test Kitchen called .kitchen.yml.

While Test Kitchen may have created the initial file automatically, it’s expected that you will read and edit this file.

driver:
  name: opennebula
  opennebula_endpoint: 'http://127.0.0.1:2633/RPC2'
  oneauth_file: ./.one_auth
  template_name: TEMPLATE_NAME
  vm_hostname: HOSTNAME
  username: USERNAME
  memory: 2048
  wait_for: 1000

provisioner:
  name: chef_solo

platforms:
  - name: ubuntu-14.04
  - name: centos-7.0

suites:
  - name: default
    run_list:
      - recipe[git::default]

To see the results of our work, let’s run the kitchen list subcommand:

$ kitchen list
Instance             Driver      Provisioner  Last Action
default-ubuntu-1404  opennebula  ChefSolo     <Not Created>

Test Kitchen calls this the Create Action. We’re going to be painfully explicit and ask Test Kitchen to only create the default-ubuntu-1404 instance:

$ kitchen create default-ubuntu-1404

Let’s check the status of our instance now:

$ kitchen list
Instance             Driver      Provisioner  Last Action
default-ubuntu-1404  opennebula  ChefSolo     Created

Running Kitchen Converge

Now run Test Kitchen to converge and test the cookbook:

$ kitchen converge default-ubuntu-1404

It will create an OpenNebula VM and run the cookbooks in the run_list.

What is Internationalization?

Internationalization means adapting computer software to different languages, regional differences and technical requirements of a target market. Internationalization is the process of designing a software application so that it can potentially be adapted to various languages and regions without engineering changes.

The process of “internationalization” usually means to abstract all strings and other locale specific bits (such as date or currency formats) out of your application. The process of “localization” means to provide translations and localized formats for these bits.

Setup the Rails Application for Internationalization

Rails adds all .rb and .yml files from the config/locales directory to your translations load path, automatically.

Gemfile and Gemfile.lock setup

# Gemfile
gem "megam_api", "~> 0.65"

# Gemfile.lock
megam_api (0.65)
rails-i18n (4.0.4)
  i18n (~> 0.6)
  railties (~> 4.0)

ApplicationController setup

Add a method in app/controllers/application_controller.rb for users to change their language, and register it as a before_filter:

before_filter :set_user_language

def set_user_language
  I18n.locale = 'en'
end

en.yml

The default en.yml locale in this directory contains a sample pair of translation strings:

en:
  hello: "Hello world"

The sample English strings loaded from the en.yml locale file for our signup page are:

signup:
  title: "Sign up"
  first_name_tag: "First Name"
  last_name_tag: "Last Name"
  phonenumber_tag: "Phone Number"
  email_tag: "Email"
  password_tag: "Password"
  login_here: "Login here"

And the reference in our view file (app/views/index.html.erb):

<u><%= t('signup.title') %></u>&nbsp;&nbsp; <a href="signin"><u class="colr-denimblue"><%= t('signup.login_here') %></u></a>
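Under the hood, t('signup.title') is just a nested-key lookup into the loaded locale data. A minimal Python sketch of the idea (Rails' real I18n backend additionally handles fallbacks, interpolation, and pluralization):

```python
# Minimal sketch of how t('signup.title') resolves against loaded locale
# data. Rails' real I18n backend also handles fallbacks, interpolation,
# and pluralization.
locales = {
    'en': {
        'hello': 'Hello world',
        'signup': {'title': 'Sign up', 'login_here': 'Login here'},
    },
}

def t(key, locale='en'):
    """Walk the dot-separated key through the nested locale data."""
    node = locales[locale]
    for part in key.split('.'):
        node = node[part]
    return node

print(t('signup.title'))
```

Swapping the locale argument (say, to 'ru' once ru.yml is loaded) changes every lookup without touching the views.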
Conclusion

Internationalization now makes it easy to support any standard localized language: create a new .yml file for the required language (e.g., ru.yml for Russian).

The Codebox IDE helps you create powerful development environments in the cloud: a collaborative, online/offline IDE for your collaborators and your teams. The component is available as an open source project (built with web technologies) on GitHub, and you can stay updated on Twitter. Let's walk through the installation procedure.

Fork codebox.git from your GitHub account.

Log in to GitHub and open codebox.git. In the top right corner you will find a 'Fork' option; click it.

Clone codebox.git:

git clone https://github.com/megamsys/codebox.git

Change directory to codebox:

cd codebox

Install the codebox dependencies with npm:

npm install

Install gulp globally:

npm install gulp -g

Build codebox with the installed gulp:

gulp build

Create the .codebox and packages directories under your home directory:

mkdir /home/megam/.codebox
mkdir /home/megam/.codebox/packages

Set the node path:

export NODE_PATH=/home/megam/.codebox/packages

Run codebox:

node ./bin/codebox.js -p 3000 -u admin:admin run ~/code/megam/workspace/

Here I run the codebox IDE on port 3000. You can set the port, username, and password as you wish. This is an example of installing and running the codebox IDE. I keep the contents of codebox in the directory code/megam/workspace/; in the same way, you can run the codebox IDE with whatever path you store yours in.

MySQL is the world’s most popular open source database. MySQL can cost-effectively help you deliver high performance, scalable database applications.

REPLICATION

Master-slave data replication allows replicated data to be copied to multiple computers for backup and analysis by multiple parties. Changes identified by a group member must be submitted to the designated "master" node. This differs from master-master replication, in which data can be updated by any authorized contributor of the group.

This article explains how to set up MySQL master-slave database replication between two cloud servers.

SYSTEM SETUP

    OS	 : Ubuntu 14.04.2
    MySQL  : MySQL 5.5

SERVERS

MySQL replication needs at least two servers: one acts as the master and the others act as slaves. The master handles both writes and reads, and replicates its changes to all slaves. A slave can also accept writes, but those changes apply only to its own database; they are not replicated to any other server.

My Servers are

    server1 : 192.168.1.12
    server2 : 192.168.1.13

Here we use server1 as the master and server2 as the slave.

Install and Configure MySQL in Master Server
    $ sudo apt-get install mysql-server

After installing MySQL, we have to configure it:

	$ nano /etc/mysql/my.cnf

Change bind-address to point to the Server1 IP address.

    bind-address   = 192.168.1.12

Uncomment the log_bin line:

	log_bin        = /var/log/mysql/mysql-bin.log

Uncomment server-id, change it to a unique positive integer, and save the file:

    server-id      = 1234

Note: use a different id for server2. Restart the mysql service:

	$ sudo service mysql restart

Grant Replication to Slave

Create a mysql user with password identification.

$ mysql -u root -p

mysql> create user 'DBUSERNAME'@'%' IDENTIFIED BY 'slavepass';

Grant Slave replication Permission to that user.

	mysql> grant replication slave on *.* to 'DBUSERNAME'@'%';

Create your own database

Create a database:

mysql> create database dbnew1;

Create a table:

mysql> create table dbnew1.student(s_name varchar(20),s_id int);

Insert whatever data you want:

	mysql> insert into dbnew1.student values('Devid',1001);

After insertion, exit mysql:

    mysql> exit

Backup the MySql Database and Send to Slave

“mysqldump” is an effective tool to backup MySQL database. It creates a *.sql file with DROP table, CREATE table and INSERT into sql-statements of the source database. To restore the database, execute the *.sql file on destination database.

 Syntax:  mysqldump -u root -p[root_password] [database_name] > dumpfilename.sql

 $ mysqldump -u root -p --all-databases --master-data > masterdump.sql

To verify the binlog coordinates recorded in the backup:

 $ grep CHANGE *.sql | head -1

CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=919;
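The binlog coordinates in that line are what the slave needs to start replicating from the right point. A small Python sketch that extracts them from a dump header like the one above:

```python
import re

# Extract the binlog coordinates from the `--master-data` dump header.
line = ("CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', "
        "MASTER_LOG_POS=919;")

m = re.search(r"MASTER_LOG_FILE='([^']+)',\s*MASTER_LOG_POS=(\d+)", line)
log_file, log_pos = m.group(1), int(m.group(2))
print(log_file, log_pos)
```

Because the dump embeds these coordinates, restoring it on the slave positions replication automatically; you only set host, user, and password in CHANGE MASTER TO.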

Send the masterdump.sql file to the slave server.

   $ scp masterdump.sql 192.168.1.13:
               or
   $ sftp server2@192.168.1.13

Install and Configure MySQL in Slave Server

$ sudo apt-get install mysql-server

After installing MySQL, we have to configure it:

	$ nano /etc/mysql/my.cnf

Change bind-address to point to the slave IP address.

    bind-address   = 192.168.1.13

Uncomment server-id and change it to a unique positive integer:

    server-id      = 4321

Uncomment the log_bin line and save the file:

	log_bin        = /var/log/mysql/mysql-bin.log

Restart the mysql service.

	$ sudo service mysql restart

Inform Slave about the master

$ mysql -u root -p

mysql> CHANGE MASTER TO MASTER_HOST='192.168.1.12', MASTER_USER='DBUSERNAME', MASTER_PASSWORD='slavepass';

mysql> exit

Verify that you have received masterdump.sql on the slave, then restore it with the following command.

$ mysql -u root < masterdump.sql

Now open MySQL and start the slave:

mysql> start slave;

mysql> show slave status\G

It shows attribute values like the following:

           Slave_IO_State: Waiting for master to send event
              Master_Host: 192.168.1.12
              Master_User: DBUSERNAME
          Master_Log_File: mysql-bin.000001
      Read_Master_Log_Pos: 919
         Slave_IO_Running: Yes
        Slave_SQL_Running: Yes
         Master_Server_Id: 1234
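The two fields to watch are Slave_IO_Running and Slave_SQL_Running; both must be Yes for healthy replication. A small Python sketch that checks this from the \G-formatted output (the sample text below is abbreviated from the status above):

```python
# Both the IO and SQL threads must report "Yes" for healthy replication.
status_text = """\
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.12
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
"""

def slave_healthy(text):
    """Parse `show slave status\\G`-style output and check both threads."""
    status = dict(
        line.strip().split(': ', 1)
        for line in text.splitlines() if ': ' in line
    )
    return (status.get('Slave_IO_Running') == 'Yes'
            and status.get('Slave_SQL_Running') == 'Yes')

print(slave_healthy(status_text))
```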

Now that the slave has started, verify that the master replicates to it.

Insert a new row on the master server:

mysql> use dbnew1
mysql> insert into student values('John',1004);
mysql> select * from student;
+--------+------+
| s_name | s_id |
+--------+------+
| Devid  | 1001 |
| mathi  | 1002 |
| thomas | 1003 |
| john   | 1004 |
+--------+------+
4 rows in set (0.00 sec)

Check the table on the slave server:

mysql> use dbnew1
mysql> select * from student;
+--------+------+
| s_name | s_id |
+--------+------+
| Devid  | 1001 |
| mathi  | 1002 |
| thomas | 1003 |
| john   | 1004 |
+--------+------+
4 rows in set (0.00 sec)

MySQL master-slave replication is tested and working.

Thank you.