
Blog Archives

Radix Technologies @ Le Forum des 100

We are happy to announce that tomorrow we will be attending LA SANTÉ DANS TOUS SES ÉTATS – 11 MAI 2017, part of Le Forum des 100 in Lausanne, Switzerland.

This conference will tackle hot subjects such as AI, Big Data and the IoT in healthcare. With our extensive experience providing services to the healthcare industry, we are very excited about this event and hope to learn a lot about the newest trends in healthcare.

Our Sales Director, Pierre Alain Schmid, will be attending the conference tomorrow.

For more information about our services and the Cloud, you can contact Mr. Schmid at:

pierre@radixcloud.com


Radix Technologies @ Amazon AWS Summit Berlin

We are very happy to announce that we will be part of this year’s AWS Summit in Berlin on the 18th of May.

If you are attending, come and talk with us about the Cloud industry, new trends, and the creative solutions Radix can offer your business.

Our German Director of Global Partners and Alliances, Mr. Mark Nerlich, will be there.


You can contact Mr. Nerlich at:

mark.nerlich@radixcloud.com

or

+49 (0)163 87 87 945

 


2017 – The next year in Cloud


Recently, we discussed the challenges we saw in 2016 and how the Cloud evolved from 2015 to 2016. We thought it would be a good idea to discuss how we will deal with those challenges going forward, and what trends we hope to see in 2017.

  1. Train and Reinforce Expertise

A lot of companies have invested in very advanced and complex Cloud solutions, but are only using a small fraction of the features. The basics are typically very easy to start using, but the advanced features require real expertise. In addition, it is often quite complicated to migrate your data onto the new platform, even though having all of your company’s data available there is extremely beneficial. Finding the right partner, with the right expertise, will be very important in 2017 to ensure that you get the most out of your investment.

As the Cloud gains even more momentum in 2017, companies will not only look to improve the ROI of their Cloud solutions, but will also look to take full advantage of the solutions they have adopted. This is exactly why Cloud experts will become a much more important part of the IT industry in 2017.

  2. Redefine Your Monitoring and CMDB

Your operations team needs a new way to manage Service Levels and keep track of IT resources. Any ITIL-based organisation is dealing with a lot of issues related to the business using Cloud solutions. Ensuring that you have the right monitoring solution, and a CMDB integrated with the different providers you are working with, will be key for operations.

  3. Build DevOps

DevOps has been considered a discipline for companies that develop applications, but it is also a powerful toolset for the operations team. In fact, most operations teams will get more use out of DevOps tools than development teams will. As companies get more comfortable with the new Cloud-based solutions they have adopted, they will develop new demands that DevOps tools can fulfil. If you have never researched DevOps, start now to ensure you can keep up with the business.

  4. Update Compliance

If you have not updated your compliance documents, this is the year to do it. There are so many different types of Cloud solutions on the market today that compliance requirements written for the Cloud of 2010 will simply keep your company in the past. Updating your documents so that the different divisions in your company can take advantage of the tools built for them will be key for businesses that want to maintain agility.

  5. Pricing Analysis

For many companies, the price of their first Cloud solution was not the most critical challenge. They were concerned about security, compliance, and basically whether it would work at all. Now that they have moved to the Cloud and seen that it works, they are ready to start looking at the cost of the tools they are using. Preparing a pricing analysis that allows the business to easily compare price against features will be an important part of many projects. As Cloud providers compete to offer the best solutions, pricing will become a key factor in choosing the right one.

2017 will be a year in which companies build more control around their Cloud solutions and refine their selection process. As companies become more comfortable with their data in the Cloud, and with how they can move data to the Cloud, DevOps will take on a much more important role in the enterprise. Advanced DevOps allows data to be moved from one service to another, ensuring you are getting the best price while staying compliant with company rules. This will be the cornerstone of IT Operations, and many companies will expand the role of DevOps to meet the challenges of the future.


A year in the Cloud – What we learned from 2016


Cloud Computing continued to grow rapidly in 2016. As we can see here from Synergy Research, the Cloud market is growing at 25% annually. As a company operating in the Cloud market, we definitely saw a lot of new requests for migrations and new Cloud services in 2016. We also saw the services provided by our partners and suppliers increase significantly, making it a lot easier to move companies to Cloud services and provide all the features they need. As the market already shows, growth will continue in 2017 and new service requests will keep coming in.

During 2015, we saw Cloud become an accepted solution in most enterprises, and in 2016, we saw most of our customers opt for Cloud solutions over classic delivery options. The following are the challenges that we saw our customers deal with the most:

1. Lack of Resources / Expertise

This is a problem that many companies have with Cloud solutions. Very often, we visit a company that has purchased and is using services on the Cloud, but has not yet migrated its data into the new solution and is not yet using all the features available. To take full advantage of these solutions, it is important to work with experts who can enable all the features and integrate all of your business data into the tool.

In 2016, we spent hundreds of hours migrating data for companies of all sizes to Cloud services they had purchased but did not fully understand in terms of use and features. This is also why we provided a lot of consulting services.

2. Security and Compliance

The main security issue we dealt with has to do with ensuring that users follow the company’s standard security practices for company data. Adopting a Cloud solution can happen very quickly, often before anybody in IT has heard about it, and most users cannot tell whether they are following company rules or not. It is important to have a process in place to manage these decisions and ensure that data is safe.

Over the past couple of years, we have developed several solutions to fit the latest security standards in the Cloud industry. One of them is Buncro, a file sharing and collaboration platform that we have had great success with in the past year, providing it to clients that need to share their data but still require premium security while doing so.

3. Managing Multiple Services

2016 saw many companies move to hybrid Cloud solutions. Very often, this meant using two or three public Cloud providers in addition to a private Cloud infrastructure. However, your users still come to you to ensure quality of service and performance. Suddenly, your team is expected to monitor infrastructure in multiple locations, global public networks, and application performance on VMs it didn’t create, all while controlling IT costs. Service management has a new level of complexity and must be reined in.

Many of these challenges are quite new for company IT groups. In the past, IT was able to keep very close control over these topics and ensure that nothing happened without them knowing about it. The new service offerings make it a lot more difficult to manage IT in the same way. 2016 was a year in which business managers took advantage of Cloud services, and IT managers started to manage these new environments driven by the business instead of by IT.


Retrieve scheduled report recipients from a Cognos database

Some time ago, we had a request from one of our clients to compile a list of all scheduled reports in their Cognos 10 environment. They had over 2000 schedules which were not maintained, so a lot of the recipients were no longer valid, some schedules needed to be stopped, etc. One way to check the recipients list for a scheduled report is to open the schedule for the specific report and check the To, CC and BCC fields. Given the sheer number of schedules, checking them one by one was simply not an option.
What came to my mind was to compile this list from the Cognos Content Store. I started going through documentation, forums and discussions to figure out which database tables I needed to construct a query. As it turned out, the Cognos Content Store is pretty much in the shadows when it comes to information about the data stored inside. I did find a lot of threads opened by people facing the same problem. It is not too hard to find a query which generates a list of scheduled reports with their names rather than their IDs, but the bottom line regarding retrieving the recipients list was pretty much the same everywhere: you would need a third-party tool, or you would have to check the schedules manually. As stated before, I could not check 2000+ schedules, and a third-party tool was a luxury I could not afford. So, I had to figure something out. After extensive digging, I came to the following finding:
The Cognos Content Store has a neat table, CMOBJPROPS26, and that table has a column, DELIVOPTIONS. This column is massive in size, since it contains all kinds of data. When it comes to reports, it contains prompts, delivery formats AND the recipients, if any. Based on a query for scheduled reports that I found, I constructed the following one:
select ob2.cmid, c.name as className, n.name as objectName,
o.DELIVOPTIONS as DeliveryOptions
from CMOBJPROPS2 p
inner join CMOBJPROPS26 o on p.cmid=o.cmid
inner join CMOBJECTS ob on ob.cmid=o.cmid
inner join CMOBJECTS ob2 on ob.pcmid=ob2.cmid
inner join CMOBJNAMES n on n.cmid=ob2.cmid
inner join CMCLASSES c on ob2.classid=c.classid
where ACTIVE = 1 order by objectName
You can export the output of this query to Excel. The DeliveryOptions column will contain a lot of useless information (at least it did for me in this situation). To ease the search, I used the nice Find and Replace option to eliminate it; for example, Find "burst;" and Replace with a blank. Make sure that you include the semicolon in the Find field, or you may end up losing valuable data (for example, if some reports have "burst" in their name and you omit the semicolon, that part of the name will be deleted as well). In most cases, the emails are contained in the tag <item xsi:type="bus:addressSMTP">RECIPIENT_EMAIL</item>. However, try to find them after you have removed all the other data not related to the email addresses. You may even try to adapt the original query to narrow down the search; however, I would strongly recommend using it as is and spending a bit more time cleaning up the Excel file, rather than risking losing entries.
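If you do decide to adapt the query, a minimal sketch would filter on the SMTP tag directly (this assumes the Content Store sits on SQL Server and that DELIVOPTIONS can be cast to character data; on another database or column type the cast will differ, which is exactly why I recommend the cautious route above):

select ob2.cmid, c.name as className, n.name as objectName,
o.DELIVOPTIONS as DeliveryOptions
from CMOBJPROPS2 p
inner join CMOBJPROPS26 o on p.cmid=o.cmid
inner join CMOBJECTS ob on ob.cmid=o.cmid
inner join CMOBJECTS ob2 on ob.pcmid=ob2.cmid
inner join CMOBJNAMES n on n.cmid=ob2.cmid
inner join CMCLASSES c on ob2.classid=c.classid
where ACTIVE = 1
-- keep only rows whose delivery options contain an SMTP recipient
and cast(o.DELIVOPTIONS as nvarchar(max)) like '%addressSMTP%'
order by objectName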
If you have any additional questions regarding this, please contact me, and I’ll be happy to reply.
Jana Georgievska
DBA


RadixCloud Lead Generation Apprentice Program

We are extremely excited to show you the job ad below, which has all the necessary info for you to become a part of our RadixCloud Lead Generation Apprentice Program!

People from Macedonia, feel free to send us your CV!

[Job ad image with the application details]


Automation – Creating systems to simplify end user engagement

For the past couple of years, automation has been all the rage in IT. Looking for ways to simplify processes for the end user comes naturally: it’s easier for the user, easier for the system engineers, and easier for the whole IT sector in general.

This past year we have had a lot of clients looking to automate their IT processes. One in particular has a large number of end users and a complex IT system running across a variety of OSs, many applications, thousands of devices and an enormous amount of data.

This is a challenge that we welcomed with open arms, and we created a system that suits their needs best. Using a set of applications and a team of our engineers, we provided the client with a “few clicks” system that can be used by all end users, regardless of their IT knowledge.

Here are the applications used for creating this system:

Connected Backup – for full backup of end user data on their PCs.


WebEx – a teleconferencing application by Cisco, a leading platform in the field.


Citrix XenApp – virtualization of applications and desktops.


Intune – a mobile device management platform by Microsoft.


SCCM (System Center Configuration Manager) – a complex application for remote control, patch management, software distribution, operating system deployment, network access protection, and hardware and software inventory.


By combining all of these, we created a system that can be used by all end users, regardless of their IT knowledge.
Make no mistake, creating this system required time and hard work, but the end result was awesome. The work does not end here, though. Maintaining, updating and improving this system is a full-time job that needs a dedicated team of professionals. The amount of data that goes through these applications on a daily basis is incredible. Creating the system is a challenge, but maintaining it is a challenge on a whole different level. With the team we have working on this project, and with our experience, this is not a problem.

For any information regarding automation or other Radix Technologies services feel free to contact me at:

Aleksandar.Maksimovski@radixcloud.com

Aleksandar Maksimovski,
Lead Service Desk Specialist


How to fix a suspect database

One of the relatively common situations during disaster recovery is a suspect database. The suspect flag is a mode that SQL Server sets on a database in several cases. When it comes to disaster recovery, databases are mainly marked as suspect after a hardware failure, an improper shutdown of the DB server, corruption of the DB files, etc. Strangely enough, it can also happen when you restore the entire DB server from a valid snapshot (it happened to me).

In such cases, you will need to bring the database back to online mode. At this point, you need to be aware that there may be some data loss during the process. This can occur if there are incomplete transactions which will need to be rolled back.

All you will need for the procedure is SQL Server Management Studio. I will use the Prime database, which I corrupted in the previous post 🙂

The first thing you need to do is turn off the suspect flag (which does not mean that the DB is fixed yet) by executing the following command:

EXEC sp_resetstatus 'Prime'

You will get a confirmation message.

The next step is to set the database into emergency mode, which will make it read-only, using the following command:

ALTER DATABASE Prime SET EMERGENCY


Then, perform a consistency check. The output will display any possible errors.

DBCC CHECKDB('Prime')


In the next step, we roll back any pending transactions. This step also puts the DB into single-user mode.

ALTER DATABASE Prime SET SINGLE_USER WITH ROLLBACK IMMEDIATE


Finally, we will correct any reported errors. Keep track of the time, since this may be a lengthy process.

DBCC CHECKDB('Prime', REPAIR_ALLOW_DATA_LOSS)


To wrap up the procedure, we need to re-enable multi-user access and bring the DB back online.

ALTER DATABASE Prime SET MULTI_USER


If we check the DB status now, it will be shown as online. We can browse through the tables and query data:

select DATABASEPROPERTYEX('Prime', 'status')
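Putting it all together, the whole sequence looks like this (a sketch assuming the database is named Prime, as in the examples above; substitute your own database name):

-- turn off the suspect flag
EXEC sp_resetstatus 'Prime'
-- set emergency (read-only) mode and check consistency
ALTER DATABASE Prime SET EMERGENCY
DBCC CHECKDB('Prime')
-- roll back pending transactions and enter single-user mode
ALTER DATABASE Prime SET SINGLE_USER WITH ROLLBACK IMMEDIATE
-- repair reported errors (may cause data loss)
DBCC CHECKDB('Prime', REPAIR_ALLOW_DATA_LOSS)
-- re-enable multi-user access and verify the status
ALTER DATABASE Prime SET MULTI_USER
select DATABASEPROPERTYEX('Prime', 'status')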


Jana Georgievska,

DBA


How to add SSL encryption to your website for free

Anyone who has set up a web or application server knows how challenging it is to request, verify and install an SSL/TLS certificate. Fast forward to today…

Let's Encrypt!
Let’s Encrypt is a certificate authority created by the Linux Foundation with community support to tackle these challenges, and a big part of the EFF’s mission to encrypt the Web. They claim: “No validation emails, no complicated configuration editing, no expired certificates breaking your website. And of course, because Let’s Encrypt provides certificates for free, no need to arrange payment.”

There are 1,358,780 certificates issued to date, and that number is growing by the minute. You can check all the issued certificates here.

The official client software “Certbot” is easy to use and completely open source.
You can grab it and see the help file using the following command:

$ git clone https://github.com/certbot/certbot && cd certbot && chmod a+x certbot-auto && ./certbot-auto --help

To create a certificate and reconfigure Apache to use it on a CentOS 6 machine, this should do the trick:

sudo ./certbot-auto --apache --email youremail@yourdomain.tld --agree-tos --webroot -w /var/www/html/ -d subdomain.domain.tld -d www.subdomain.domain.tld
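Let’s Encrypt certificates are valid for 90 days, so renewal is worth automating. A minimal sketch (the install path and schedule here are assumptions; adjust them to wherever you cloned certbot):

# check all installed certificates and renew any that are close to expiry
$ ./certbot-auto renew

# or run it unattended as a daily cron job at 03:00 (crontab -e)
0 3 * * * /opt/certbot/certbot-auto renew --quiet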

For more advanced and automated configuration instructions, visit the Certbot homepage. If you need any help, leave a reply in the comments section.

Martin Markovski,
Director Of Technology


How to corrupt your database

In the unusual world of DBA testing scenarios, you will sometimes need to corrupt your MS SQL database on purpose. One such scenario is checking how long it would take to recover the database during disaster recovery. What is described here is for testing purposes only and MUST NOT be done on any production system. The entire process was conducted in a local test environment.

There are several procedures you can use, mainly based on updating the status of the database in the system tables, but they do not always work. In some cases, the needed options are deprecated in your SQL Server version; for example, in MS SQL Server 2005, the extended option that allowed running modification queries against the system tables was removed. I ran into this particular case when I tried to make a database SUSPECT on MS SQL Server 2008 R2. Even though the statement to allow updates went through smoothly:

sp_configure 'allow updates', 1

When I tried to change the status of the database named Prime:

UPDATE sysdatabases SET STATUS = 320 WHERE name = 'Prime'

I received the following error:

Msg 259, Level 16, State 1, Line 1

Ad hoc updates to system catalogs are not allowed.

At that particular moment, I did not have the time to troubleshoot this error, and I needed the Prime database marked as SUSPECT.

As it turns out, the easiest way to get a SUSPECT database is to mess up the files. This works regardless of the MS SQL version.

 

First, restore a database from a good backup. In my Management Studio, I have the Prime database, which I intend to place in SUSPECT status.


You can check the status of the database using the statement:

select databasepropertyex('Prime', 'status')


Or simply see if you can list and query the tables. My Prime database is now ONLINE, which means that it is alive and well.

Next, shut down the MS SQL services from the Services panel in Windows.
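If you prefer the command line to the Services panel, the services can be stopped and later restarted like this (a sketch assuming the default instance, whose service name is MSSQLSERVER; a named instance’s service is called MSSQL$InstanceName):

REM stop the default SQL Server instance before touching its files
net stop MSSQLSERVER

REM ...and start it again later, once the log file has been edited
net start MSSQLSERVER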

Locate the .ldf file of your database. In my case, this is the Prime_log.ldf file. Open the file for editing (I used Notepad).


And make an adjustment. I added a new row in the file and typed 12345.

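If you would rather script this step, appending a few characters from the command line corrupts the log just as effectively (a sketch; the path to the .ldf file is an assumption, so adjust it to your data directory):

REM append junk to the transaction log (the SQL services must be stopped)
echo 12345 >> "C:\SQLData\Prime_log.ldf"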

Save the file and start the MS SQL services again. Open the Management Studio and execute the status statement from earlier once more.


The Prime database is now a SUSPECT 😀

If you try to expand the database node in Object Explorer, you will get an error.


Congratulations, you have successfully corrupted your database 😀

 

Jana Georgievska,

DBA

 

 
