Sunday, January 31, 2016

Cassandra - Cannot proceed on repair because a neighbor is dead: session failed/Endpoint Died

Overview

Many of you may have run into these errors during nodetool repair:

Repair session 2a703be0-7722-11e5-ac24-c5a3c7c6e3cf for range (5995916511884239066,6033116056473427945] failed with error java.io.IOException: Cannot proceed on repair because a neighbor (/<node IP>) is dead: session failed.

Repair session 28c63560-7722-11e5-ac24-c5a3c7c6e3cf for range (-135025182178679597,-28481663575762586] failed with error java.io.IOException: Endpoint /<IP of node> died

Looking at this error, we get scared and immediately execute the "nodetool status" command to verify the health of the cluster. Everything looks good there, yet we still go to that node to verify it is actually not dead :)

Well, I did face this issue and ran around verifying that my nodes were healthy, and they were. So why did I receive this error ONLY during repair?

After digging deeper, I identified a few facts:


1. We had very large partitions (30 - 50GB), visible via nodetool cfstats <keyspace>.<CF>.
2. In comparison, we were using the default heap configuration, i.e. an 8GB heap, which does not seem to be enough for such big partitions.
3. We noticed that whenever CPU went high, it was the JVM that was eating up the CPU (used JConsole and jmc.exe to verify this).
4. We noticed we had very high read latency, also via nodetool cfstats <keyspace>.<CF> (see the quick check after the Reference snippet below).
5. We noticed that whenever repair executed (mostly when it executed at the same time on multiple nodes), it initiated compaction on these large partitions, which needed more heap than we had allocated, resulting in FailureDetector.instance.isAlive(endpoint) returning a failure.

Reference:


 // Checking all nodes are live
        for (InetAddress endpoint : endpoints)
        {
            if (!FailureDetector.instance.isAlive(endpoint))
            {
                String message = String.format("Cannot proceed on repair because a neighbor (%s) is dead: session failed", endpoint);
                differencingDone.signalAll();
                logger.error("[repair #{}] {}", getId(), message);
                throw new IOException(message);
            }
        }
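
Facts 1 and 4 above came straight out of nodetool cfstats; a quick way to pull just those two numbers is shown below (keyspace/table names are placeholders, and the field names are the ones cfstats prints in the 2.x series):

# Largest compacted partition and local read latency for one table
nodetool cfstats <keyspace>.<CF> | grep -E "Compacted partition maximum bytes|Local read latency"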


6. We noticed that for read/write-heavy clusters, chances are the default tenuring threshold in the GC settings is very low (in our case it was the default, i.e. 1: [JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"]), causing premature promotions. This was very likely a symptom of our problem. We decided to try increasing the tenuring threshold to a larger value to prevent premature promotions, so that ParNew collects most garbage in the young generation (verified with jstat).

The tenuring threshold can also be observed in the gc.log file:

~ $ cat /var/log/cassandra/gc.log.9 | grep age | tail -10
- age   1:     129160 bytes,     129160 total
- age   1:     127672 bytes,     127672 total
- age   1:     128600 bytes,     128600 total
- age   1:     128784 bytes,     128784 total
- age   1:       1984 bytes,       1984 total
- age   1:     137176 bytes,     137176 total
- age   1:     128592 bytes,     128592 total
- age   1:     128504 bytes,     128504 total
- age   1:     127744 bytes,     127744 total
- age   1:     128256 bytes,     128256 total


7. We also noticed that GC was being initiated very frequently, causing frequent application pauses.

~ $ cat /var/log/cassandra/gc.log.9 | grep application
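
Assuming the JVM is writing the standard "Total time for which application threads were stopped" safepoint lines (-XX:+PrintGCApplicationStoppedTime), the pauses can be counted and sampled like this:

# How many application pauses does this log file contain?
grep -c "Total time for which application threads were stopped" /var/log/cassandra/gc.log.9

# Eyeball the duration of the most recent pauses
grep "Total time for which application threads were stopped" /var/log/cassandra/gc.log.9 | tail -5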

Conclusion:

We understood that this was being caused by an overloaded heap and made the changes mentioned below to resolve the issue:

We increased MAX_HEAP_SIZE from 8GB to 16GB and accordingly increased HEAP_NEWSIZE to 4GB (MAX_HEAP_SIZE/4). We also increased MaxTenuringThreshold from 1 to 5 (with a plan to raise it up to 20 based on performance observations). These settings live in conf/cassandra-env.sh:

MAX_HEAP_SIZE="16G"

HEAP_NEWSIZE="4G"

JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=5"

The above changes have the following benefits:

  • This increases the amount of work ParNew has to do, but it prevents objects from being promoted prematurely.
  • The increased size of the ParNew Eden space also reduces minor GC frequency.
  • ParNew will collect most garbage in the young generation, resulting in fewer major GCs.
  • This also increases the survivor space size (HEAP_NEWSIZE/8), giving objects more room to age in the young generation before promotion.
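
One hedged way to confirm this behaviour after the change is to watch the generations with jstat (the pgrep expression is just an assumption about how the Cassandra daemon appears in the process list):

# S0/S1 = survivor, E = Eden, O = old gen occupancy (%); YGC/FGC = young/full GC counts.
# Samples every 5 seconds.
jstat -gcutil $(pgrep -f CassandraDaemon) 5000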

Result:

We noticed tremendous improvements:

  • Application pauses decreased from 80 times per minute to 6 times per minute.
  • Objects are now living longer in the young generation, which results in fewer major GCs:
- age   1:     356112 bytes,     356112 total
- age   2:     225128 bytes,     581240 total
- age   3:      59760 bytes,     641000 total
- age   4:      61544 bytes,     702544 total
- age   5:      71984 bytes,     774528 total

  • CMS activity dropped to the floor after the changes.
  • Non-heap memory usage dropped drastically.
  • Heap utilization stabilized and stayed under 50% consistently.
Above all, our repairs started completing successfully without any heap/CPU spikes... Phew, what a relief... :)

Hope this helps!!

Cheers!!!

Monday, January 18, 2016

Cassandra Cluster Migration to a New Cluster V2.2.4

Overview

This page describes how to migrate from an existing cluster to a new cluster. We often face situations where we want to move an entire cluster to a new one (the reason could be anything: better hardware, moving away from an existing data center, etc.) without taking any downtime, while ensuring all data gets migrated to the new cluster. We had the same requirement and decided to go with the strategy mentioned below, which worked like a charm.

The whole process is online and does not impact clients. The old DC can be decommissioned at any time, but as a best practice we should take a snapshot on each node of the old DC before decommissioning.

This involves 2 major steps:

  1. Build a new datacenter - build a parallel new data center with the number of nodes you want.
  2. Decommission old Data center (old cluster nodes).

1. Build a new Datacenter 

This further involves the following steps:

1. Ensure that we are using NetworkTopologyStrategy for all of our keyspaces (excluding a few system keyspaces).
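
A quick way to check and, if needed, fix this from cqlsh before adding the new DC (the keyspace name is just the example used later in this post):

DESCRIBE KEYSPACE tutorialspoint;

-- If the keyspace is still on SimpleStrategy, switch it to NetworkTopologyStrategy first
ALTER KEYSPACE tutorialspoint WITH replication = {'class': 'NetworkTopologyStrategy','datacenter1': 2};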

2. Make the necessary modifications in each new node's config files, matching the nodes of the old data center (the fastest way is to copy the config files from an existing node to the new node and then make node-specific changes).

Important configs are (see the cassandra.yaml sketch after this list):


  • The cluster name should be the same as the existing cluster name.
  • Specify the seed nodes; don't forget to add the new DC's nodes as seed nodes.
  • Specify/verify the node IP address settings just as on the existing nodes: listen_address, rpc_address, broadcast_rpc_address.
  • Set auto_bootstrap: false in the cassandra.yaml file. This can be specified anywhere in cassandra.yaml and prevents the new nodes from attempting to stream all the data from the other nodes in the data center.
  • Set endpoint_snitch the same as on the existing nodes and make the entries in the respective topology file; in our case we use PropertyFileSnitch, hence the new datacenter and node entries need to be made in conf/cassandra-topology.properties across all nodes of both the old and new DC.
  • Make sure to configure the same num_tokens value as on the existing nodes.
  • Make sure to enable authentication if it is enabled on the existing nodes.
  • Specify the data and commitlog directories.
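
A minimal cassandra.yaml sketch for one new-DC node covering the settings above (all names, IPs and token counts are placeholders; the data/commitlog paths match the directories cleaned in the next step):

cluster_name: '<same-as-existing-cluster>'
num_tokens: 12                     # match the value used on the existing nodes
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "<old-DC-seed-IP>,<new-DC-seed-IP>"
listen_address: <this-node-IP>
rpc_address: <this-node-IP>
endpoint_snitch: PropertyFileSnitch
auto_bootstrap: false              # remove again after the rebuild (step 10)
data_file_directories:
    - /cassandra/data
commitlog_directory: /cassandra/log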
3. Make sure to clean the data directory and commitlog directory on the new nodes; this ensures a clean bootstrap.

rm -rf /cassandra/data/*
rm -rf /cassandra/log/*

4. Start Cassandra on each new node one by one; during bootstrap it will fetch all the keyspace and table schemas, but not the data.

5. Verify the keyspaces have been fetched by looking at the data directory or by executing the "desc keyspaces" command on the new nodes, and check "system.log" for any errors.

6. Execute "nodetool status" to check that the new datacenter and nodes are showing up in a healthy state; the LOAD should be very low since data has not been streamed yet (make a note of it, we will compare it after streaming).

Please see below: the new datacenter "DC2" has all nodes up, but the LOAD across all new nodes is very low compared to the nodes in the existing datacenter "datacenter1".

Datacenter: DC2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns    Host ID                               Rack
UN  1**.27.7*.***  78.56 KB   12           ?       7b9ca516-19aa-432b-ac82-8d8fef44beef  RAC1
UN  172.27.74.135  168.65 KB  12           ?       9b47f3eb-b871-4f6b-a04f-894c50dffb5f  RAC1
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns    Host ID                               Rack
UN  172.27.74.116  228.66 KB  12           ?       d4797291-68e5-428a-9e87-94d5ab1125d7  rack1
UN  172.27.74.55   234.22 KB  12           ?       55b08035-7517-4cea-8908-5825ef097389  rack1


7. Alter all keyspaces and modify the replication settings to replicate to the new data center too. This applies to the system_auth keyspace as well.

ALTER KEYSPACE tutorialspoint WITH replication = {'class': 'NetworkTopologyStrategy','datacenter1': 2,'DC2':1};
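
The same change for system_auth would look like this (the replication counts simply mirror the example above; size them for your own cluster):

ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy','datacenter1': 2,'DC2':1};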


8. Execute "nodetool rebuild <source-dc-name>" on each new node to stream the data that node owns (i.e. the copy of the data the node is supposed to own as per the RF). This can be verified in the data directory. Nodetool rebuild only streams data from a single source replica per range.

  • This command needs to be executed on each NEW node.
  • It can be executed in parallel on multiple NEW nodes, depending on IO capacity (see the example below).
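
On a DC2 node the rebuild would look roughly like this (nohup is only a precaution because rebuild can run for a long time):

nohup nodetool rebuild datacenter1 &

# Streaming progress can be watched while it runs
nodetool netstats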

9. Once the rebuild finishes on each new node, verify the LOAD owned by each new node by executing "nodetool status".

Please see below: the new datacenter "DC2" has all nodes up, and the LOAD across all new nodes is now similar to the nodes of datacenter "datacenter1".


Datacenter: DC2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns    Host ID                               Rack
UN  172.27.74.124  201.81 KB  12           ?       7b9ca516-19aa-432b-ac82-8d8fef44beef  RAC1
UN  172.27.74.135  232.75 KB  12           ?       9b47f3eb-b871-4f6b-a04f-894c50dffb5f  RAC1
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns    Host ID                               Rack
UN  172.27.74.116  253.91 KB  12           ?       d4797291-68e5-428a-9e87-94d5ab1125d7  rack1
UN  172.27.74.55   227.61 KB  12           ?       55b08035-7517-4cea-8908-5825ef097389  rack1


10. Remove "auto_bootstrap: false" from the cassandra.yaml file on all new nodes. This returns the parameter to its normal setting, so the nodes can get all the data from the other nodes in the data center if restarted.

Important:
  • Setting the replication factor to replicate to the new DC, followed by nodetool rebuild, ensures all the data has been copied over to the new DC nodes and the new DC is ready to work independently; hence the old DC can be decommissioned safely without worrying about data loss.
  • The whole process is online and does not impact clients. The old DC can be decommissioned at any time, but as a best practice we should take a snapshot on each node of the old DC before decommissioning.

2. Decommission old Data center (old cluster nodes). 

This further involves the following steps:


  1. Make sure to take a snapshot on each node of the old DC, to be on the safer side (optional).
  2. Make sure no clients are still writing to any nodes in the old data center (application-tier load balancing through the driver).
  3. Run a full repair on the new nodes (DC2 nodes). This ensures that all data is propagated from the data center being decommissioned. It is not required if the decommission is done immediately after adding the new DC.
  4. Alter the keyspaces to remove the old datacenter from the replication strategy, so they no longer reference the data center being removed.
  5. Run "nodetool decommission" (causes a live node to decommission itself, streaming its data to the next node on the ring) on every node in the data center being removed, one by one (see the command sketch after this list).
  6. Shut down the Cassandra service.
  7. Remove the old node entries from the snitch file (conf/cassandra-topology.properties).
  8. Remove the old node entries from the seed node list in the "cassandra.yaml" file of each new node.
  9. Do a rolling restart to ensure everything comes up nicely.
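
Putting steps 3-5 together, the commands would look roughly like this (keyspace and DC names follow the examples above; adjust them to your environment):

# Step 3 - on each DC2 node: full repair so DC2 holds all the data
nodetool repair -full

# Step 4 - from cqlsh: drop datacenter1 from the replication settings
ALTER KEYSPACE tutorialspoint WITH replication = {'class': 'NetworkTopologyStrategy','DC2': 1};
ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy','DC2': 1};

# Step 5 - on each datacenter1 node, one by one
nodetool decommission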


Monday, January 11, 2016

Role Based Access Control (RBAC) Cassandra 2.2.4 Explained with Example

Overview



Cassandra has recently launched the much-awaited feature "Role Based Access Control (RBAC)", but there is very limited information available on the web, and people actually get stuck at practical scenarios, viz.:

1. How to migrate from legacy authentication to RBAC.
2. Are there any changes that need to be made in the cassandra.yaml file?
3. What table structure changes have been made to the system_auth keyspace?
4. What about the legacy tables?
5. We have the legacy tables as well as the new tables, and creating a new user doesn't work.
6. What are the new commands for RBAC?

Here I am trying to combine various web resources along with my test cases, which might be helpful for you while implementing RBAC.


IMPORTANT FACTS:


  1. Legacy tables: users, credentials, permissions
  2. New tables: roles, role_members, role_permissions, resource_role_permissons_index
  3. Ensure the following parameter is enabled in cassandra.yaml: "role_manager: CassandraRoleManager" (see the sketch after this list).
  4. During an upgrade from an earlier version, the conversion of users to the role-based system happens automatically. This can be verified in system.log, as explained below in detail.
  5. The migration doesn't impact existing users and their logins, as the legacy tables keep functioning until a superuser drops them.
  6. If we do not drop the legacy tables, the system will have a login discrepancy: old users will keep working fine, but a newly created role won't work, since the system is still pointing to the legacy tables.
  7. It is vital to drop the legacy tables only after verifying the user/permission migration; this can be checked in system.log, and the data in the legacy and new tables can be compared.
  8. As soon as the legacy tables are dropped, Cassandra switches itself to the new role-based system and any role created going forward will work (a node bounce is not required).
  9. In order for database users to identify as a particular role at login, that role must have its LOGIN attribute set to true; this prevents users from inadvertently logging in under the identity of a group.
  10. Another interesting aspect to this is that the creator of a role (the role the database user who issues the CREATE ROLE statement is logged in as), is automatically granted permissions on it. This enables users with role-creation privileges to also manage the roles they create, allowing them to ALTER, DROP, GRANT and REVOKE them. This automatic granting of ‘ownership’ permissions isn’t limited to roles either, it also applies to database objects such as keyspaces, tables (and soon to user defined functions). This largely removes the requirement to have any active superuser roles, which reduces the risk of privilege escalation. See CASSANDRA-7216 and CASSANDRA-8650 for full details.
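
For reference, the internal-auth settings in cassandra.yaml (including the role_manager mentioned in fact 3) typically look like this:

authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
role_manager: CassandraRoleManager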

HOW TO:

The authentication & authorization subsystems have been redesigned to support role based access control (RBAC), resulting in a change to the schema of the system_auth keyspace. See below for more detail. For systems already using the internal auth implementations, the process for converting existing data during a rolling upgrade is straightforward. As each node is restarted, it will attempt to convert any data in the legacy tables into the new schema. Until enough nodes to satisfy the replication strategy for the system_auth keyspace are upgraded and so have the new schema, this conversion will fail with the failure being reported in the system log.
System Log:
INFO  [OptionalTasks:1] CassandraRoleManager.java:410 - Converting legacy users
INFO  [OptionalTasks:1] CassandraRoleManager.java:420 - Completed conversion of legacy users
INFO  [OptionalTasks:1] CassandraRoleManager.java:425 - Migrating legacy credentials data to new system table
INFO  [OptionalTasks:1] CassandraRoleManager.java:438 - Completed conversion of legacy credentials
INFO  [OptionalTasks:1] CassandraAuthorizer.java:396 - Converting legacy permissions data
INFO  [OptionalTasks:1] CassandraAuthorizer.java:435 - Completed conversion of legacy permissions
During the upgrade, Cassandra's internal auth classes will continue to use the legacy tables, so clients experience no disruption. Issuing DCL statements during an upgrade is not supported. Once all nodes are upgraded, an operator with superuser privileges should drop the legacy tables: system_auth.users, system_auth.credentials and system_auth.permissions. Doing so will prompt Cassandra to switch over to the new tables without requiring any further intervention. While the legacy tables are present, a restarted node will re-run the data conversion and report the outcome so that operators can verify that it is safe to drop them.

  • Authentication & Authorization APIs have been updated to introduce roles. Roles and the permissions granted to them are inherited, supporting role based access control. The role concept supersedes that of users, and CQL constructs such as CREATE USER are deprecated but retained for compatibility. The requirement to explicitly create roles in Cassandra even when auth is handled by an external system has been removed, so authentication & authorization can be delegated to such systems in their entirety.
  • In addition to the above, roles are also first-class resources and can be the subject of permissions. Users (roles) can now be granted permissions on other roles, including CREATE, ALTER, DROP & AUTHORIZE, which removes the need for superuser privileges in order to perform user/role management operations.
  • Creators of database resources (Keyspaces, Tables, Roles) are now automatically granted all permissions on them (if the IAuthorizer implementation supports this).

Various permissions in RBAC and the objects they apply to:

Permission                                       | Object
-------------------------------------------------+----------------------------------------
CREATE                                           | keyspace, table, function, role, index
ALTER                                            | keyspace, table, function, role
DROP                                             | keyspace, table, function, role, index
SELECT                                           | keyspace, table
MODIFY (INSERT, UPDATE, DELETE, TRUNCATE)        | keyspace, table
AUTHORIZE (GRANT PERMISSION, REVOKE PERMISSION)  | keyspace, table, function, role
DESCRIBE                                         | LIST ROLES
EXECUTE (SELECT, INSERT, UPDATE)                 | functions

RBAC Implementation & Testing.


#Check existing roles.
admin@cqlsh:system_auth> LIST ROLES;

 role      | super | login | options
-----------+-------+-------+---------
     admin |  True |  True |      {}
 cassandra |  True |  True |      {}

(2 rows)

#List all roles of user(role) admin.
admin@cqlsh:system_auth> LIST ROLES of admin;

 role  | super | login | options
-------+-------+-------+---------
 admin |  True |  True |      {}

(1 rows)

#Create a login-enabled superuser role... effectively creating a user.
admin@cqlsh:system_auth> CREATE role nnishant WITH PASSWORD = 'abcd@123' AND LOGIN = true and SUPERUSER = true ;
#List all roles of user (role) nnishant. Notice the error: it says role nnishant doesn't exist, even though we have just created it.
admin@cqlsh:system_auth> LIST ROLES of nnishant;

InvalidRequest: code=2200 [Invalid query] message="<role nnishant> doesn't exist"

#Let's list out all the roles... we can see nnishant here, but wait... why is it showing super and login as False??
admin@cqlsh:system_auth> list roles;

 role      | super | login | options
-----------+-------+-------+---------
     admin |  True |  True |      {}
 cassandra |  True |  True |      {}
  nnishant | False | False |      {}

(3 rows)

#Let's check the system table system_auth.roles; here it shows correctly. So where is the problem? Let me tell you: if you try to log in as "nnishant" through cqlsh, it fails, since Cassandra has not yet started using the new system tables.
admin@cqlsh:system_auth> SELECT * from roles;

 role      | can_login | is_superuser | member_of | salted_hash
-----------+-----------+--------------+-----------+--------------------------------------------------------------
  nnishant |      True |         True |      null | $2a$10$ZgXpLYZ3.9QE5XwtHw6yQe5pB2AGpezSS9hayzPevBjprZ.JuDOnW
     admin |      True |         True |      null | $2a$10$Fg4zH7yFm/IN8kg0dqPR3ODQkY83Oj8SFGy7R5hnSOzqanl9LZ8iC
 cassandra |      True |         True |      null | $2a$10$EAPR.YFjULRORyc/amQXAuS2J2atefzrzT7JZPJ8uy25n7Q6nS9bm

(3 rows)

#Let's create another user, test1, with login enabled and superuser privileges.
admin@cqlsh:system_auth> create role test1 WITH PASSWORD = 'abcd@123' AND LOGIN = true AND SUPERUSER = true;
#Let's list out the roles of user "test1", and again we get the same error.
admin@cqlsh:system_auth> LIST ROLES of test1;

InvalidRequest: code=2200 [Invalid query] message="<role test1> doesn't exist"

So where is the problem??

#Let's verify the tables in system_auth: users, credentials and permissions are the legacy tables, while roles, role_members, role_permissions and resource_role_permissons_index are the new ones.
admin@cqlsh> USE system_auth;

admin@cqlsh:system_auth> desc tables;

role_members  roles             resource_role_permissons_index  permissions

users         role_permissions  credentials

#Let's drop the legacy tables. Please note: no need to bounce the nodes after this operation; Cassandra automatically and immediately switches to the new tables.
admin@cqlsh:system_auth> DROP TABLE credentials;

admin@cqlsh:system_auth> DROP TABLE permissions;

admin@cqlsh:system_auth> DROP TABLE users;

admin@cqlsh:system_auth> desc tables;

role_members  roles  role_permissions  resource_role_permissons_index

#Now execute LIST ROLES once again... BINGO, we see the correct data now.
admin@cqlsh:system_auth> LIST ROLES;

 role      | super | login | options
-----------+-------+-------+---------
     admin |  True |  True |      {}
 cassandra |  True |  True |      {}
  nnishant |  True |  True |      {}
     test1 |  True |  True |      {}

(4 rows)

#This also shows correct data.
admin@cqlsh:system_auth> LIST ROLES of test1;

 role  | super | login | options
-------+-------+-------+---------
 test1 |  True |  True |      {}

(1 rows)

admin@cqlsh:system_auth> exit

#Let's try to connect as the user test1 which we just created. And we are in...
10:47 AM(cass@ubuntu:)/cassandra/data/system_auth: cqlsh -u test1

Password:

Connected to NAV at 127.0.0.1:9042.

[cqlsh 5.0.1 | Cassandra 2.2.4 | CQL spec 3.3.1 | Native protocol v4]

Use HELP for help.


Example of setting permissions and user/role mapping.

#Create a passwordless role dedicated to the DEV team.
test1@cqlsh> create ROLE ROLE_DEV;
#Verify the role was created as desired.
test1@cqlsh> LIST ROLES;

 role      | super | login | options
-----------+-------+-------+---------
     admin |  True |  True |      {}
 cassandra |  True |  True |      {}
  nnishant |  True |  True |      {}
  role_dev | False | False |      {}
     test1 |  True |  True |      {}

(5 rows)

test1@cqlsh> LIST ROLES of role_dev;

 role     | super | login | options
----------+-------+-------+---------
 role_dev | False | False |      {}

(1 rows)

#Grant SELECT permission to ROLE_DEV on all keyspaces.
test1@cqlsh> GRANT SELECT ON ALL KEYSPACES TO role_dev;

test1@cqlsh> LIST ROLES of role_dev;

 role     | super | login | options
----------+-------+-------+---------
 role_dev | False | False |      {}

#Verify the GRANT we have assigned to this role.
test1@cqlsh> LIST ALL PERMISSIONS OF role_dev;

 role     | username | resource        | permission
----------+----------+-----------------+------------
 role_dev | role_dev | <all keyspaces> |     SELECT

(1 rows)

#Let's create a user for the DEV team with login enabled but no superuser privileges.
test1@cqlsh> CREATE ROLE alex WITH PASSWORD = 'abcd@123' AND LOGIN = true ;
#Verify the same.
test1@cqlsh> LIST ROLES OF alex;

 role | super | login | options
------+-------+-------+---------
 alex | False |  True |      {}

(1 rows)

#Check if alex has any permissions on the database; note we haven't assigned any so far, and that's what we get in the result.
test1@cqlsh> LIST ALL PERMISSIONS OF alex;

 role | resource | permissions
------+----------+-------------

(0 rows)

#Now add the DEV team user to its group (role_dev).
test1@cqlsh> GRANT role_dev to alex;
#Now let's verify the permissions again, and here we go... we can see alex has got SELECT permission on all keyspaces.
test1@cqlsh> LIST ALL PERMISSIONS OF alex;

 role     | username | resource        | permission
----------+----------+-----------------+------------
 role_dev | role_dev | <all keyspaces> |     SELECT

(1 rows)

Similarly, we can add multiple team members to their respective groups. At the same time, we can change the permissions of an entire group by executing a single command rather than executing it against each user.
For example:
We have 50 users mapped to the dev group (role_dev). At this point all those 50 users have SELECT permission on ALL KEYSPACES, but now there is a requirement to give MODIFY permission to all those 50 users on a specific keyspace "test". This can simply be achieved by executing the single command below. So instead of executing the grant against 50 users, we achieved it by executing it against the group. Another beauty of role-based access control :)
admin@cqlsh> GRANT MODIFY ON Keyspace test to role_dev;

admin@cqlsh> LIST ALL PERMISSIONS OF role_dev;

 role     | username | resource        | permission
----------+----------+-----------------+------------
 role_dev | role_dev | <all keyspaces> |     SELECT
 role_dev | role_dev | <keyspace test> |     MODIFY

(2 rows)

admin@cqlsh> LIST ALL PERMISSIONS  OF alex;

 role     | username | resource        | permission
----------+----------+-----------------+------------
 role_dev | role_dev | <all keyspaces> |     SELECT
 role_dev | role_dev | <keyspace test> |     MODIFY


#Now let's log in as the dev user alex.
test1@cqlsh> exit

10:58 AM(cass@ubuntu:)/cassandra/data/system_auth: cqlsh -u alex

Password:

Connected to NAV at 127.0.0.1:9042.

[cqlsh 5.0.1 | Cassandra 2.2.4 | CQL spec 3.3.1 | Native protocol v4]

Use HELP for help.

#Let's check whether roles are visible to alex. Yes, he can view the roles he has access to, but not all of them. Another level of security.
alex@cqlsh> LIST ROLES ;

 role     | super | login | options
----------+-------+-------+---------
     alex | False |  True |      {}
 role_dev | False | False |      {}

(2 rows)

But wait: if alex queries the system table directly, he can see everything... Bingo, we have a security breach... and obviously a bug :)
So "ALL KEYSPACES" includes the system keyspaces too; until this gets fixed in a coming version, we should avoid using "ALL KEYSPACES" for any app users (see the sketch after the output below).
alex@cqlsh> select * from system_auth.roles;

 role      | can_login | is_superuser | member_of    | salted_hash
-----------+-----------+--------------+--------------+--------------------------------------------------------------
  nnishant |      True |         True |         null | $2a$10$ZgXpLYZ3.9QE5XwtHw6yQe5pB2AGpezSS9hayzPevBjprZ.JuDOnW
     admin |      True |         True |         null | $2a$10$Fg4zH7yFm/IN8kg0dqPR3ODQkY83Oj8SFGy7R5hnSOzqanl9LZ8iC
  role_dev |     False |        False |         null |                                                         null
     test1 |      True |         True |         null | $2a$10$i3LeXdJ2y/8iMylHc8Ob6ebMWJOE9Rn3NWbxeokjreKRf4664dpT.
 cassandra |      True |         True |         null | $2a$10$EAPR.YFjULRORyc/amQXAuS2J2atefzrzT7JZPJ8uy25n7Q6nS9bm
      alex |      True |        False | {'role_dev'} | $2a$10$Afre.n.by.Bg0Llf5txiS.9sswymcaZ9BsgbmLvi5IWbGsVJ8Msqi

(6 rows)
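
Until this is fixed, a safer pattern is to grant on the specific application keyspaces instead of ALL KEYSPACES; a minimal sketch using the keyspace from the earlier example:

-- Replace the blanket grant with grants on just the keyspaces the team needs
REVOKE SELECT ON ALL KEYSPACES FROM role_dev;
GRANT SELECT ON KEYSPACE test TO role_dev;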



Stay tuned guys....
Monday, August 18, 2014

SQL Server Bulk Session kill script

Ask for the SQL Server instance, login and hostname for which the sessions need to be killed, replace the values of @login and @host in the script below, and execute it on the server where the sessions need to be killed.

declare @login varchar(200)
declare @host varchar(200)
declare @id int
declare @host_desc varchar(200)
declare @login_desc varchar(200)
declare @str nvarchar(200)

/* Replace login and hostname values */
SET @login = '<login-need-to-be-killed>'
SET @host = '<login-from-which-host>'

-- Collect all matching user sessions (spid > 50 excludes system sessions)
DECLARE cur_kill CURSOR
STATIC FOR
SELECT  spid SPID,
        convert(sysname, rtrim(loginame)) Login,
        convert(varchar(50), rtrim(hostname)) Host
    FROM
        master.dbo.sysprocesses a (nolock)
        inner join master.dbo.sysdatabases b (nolock) on a.dbid = b.dbid
    WHERE
        spid > 50 and loginame = @login and hostname = @host
    ORDER BY
        cpu desc

OPEN cur_kill
IF @@CURSOR_ROWS > 0
BEGIN
    FETCH NEXT FROM cur_kill INTO @id, @login_desc, @host_desc
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Kill the session and log what was killed
        set @str = 'kill ' + cast(@id as nvarchar(max))
        --print @str
        exec (@str)
        PRINT '[KILLED] - ID : ' + convert(varchar(20), @id) + ', Login : ' + @login_desc + ', Host : ' + convert(varchar(20), @host_desc)
        FETCH NEXT FROM cur_kill INTO @id, @login_desc, @host_desc
    END
END
CLOSE cur_kill
DEALLOCATE cur_kill

In order to run a trace against SQL Server you must normally be a member of sysadmin. (Here is how to enable trace access without granting the sysadmin role.)

Use the query below to enable trace access without granting the sysadmin role.


Use Master;
GRANT VIEW SERVER STATE TO [Domain\ADGroup or User];
GRANT SHOWPLAN TO [Domain\ADGroup or User];
GRANT ALTER TRACE TO [Domain\ADGroup or User];





SQL Query to get IP address of local & client machine



SELECT
   CONNECTIONPROPERTY('net_transport') AS net_transport,
   CONNECTIONPROPERTY('protocol_type') AS protocol_type,
   CONNECTIONPROPERTY('auth_scheme') AS auth_scheme,
   CONNECTIONPROPERTY('local_net_address') AS local_net_address,
   CONNECTIONPROPERTY('local_tcp_port') AS local_tcp_port,
   CONNECTIONPROPERTY('client_net_address') AS client_net_address;
Tuesday, April 2, 2013

Facebook Feed is not getting displayed in latest release of Skype - version 6.3

Hi guys, yes, I know this post is somewhat out of the scope of this blog, but I still thought I'd write it, as I believe most of us use Skype for communication. If you have upgraded to the latest Skype version 6.3, then you must have noticed the Facebook feed disconnecting on the Skype home page, which is very annoying.

I tried to dig a bit deeper and found the reason, but don't get excited: I do not have a fix for this :(

When you first connect to Facebook using Skype, it asks for some permissions. One of them is the permission to access your data when you are offline, which has been deprecated by Facebook recently.




You might see the Facebook wall for a while, then suddenly it goes away and you are left with the prompt to connect to Facebook again :)


If you wish to dig deeper, log in to Facebook, go to Security Settings -> App Settings, and click Edit against Skype.


And you will see the actual error there:





Guys, help me with a solution if you have one, or I hope the Microsoft/Skype team will see my post and soon release a fix for this ;)


 