
5 New Tech Tested Products for Your Business

Ever wondered what the best tech products on the market are at this very moment? The experts at Network World weigh in and give us a glimpse of the newest innovations available.

Vidder PrecisionAccess – By rendering applications invisible to unauthorized users, PrecisionAccess does a fantastic job of preventing application hacking. Even with stolen credentials, hackers can’t access protected applications from unauthorized devices.

VeloCloud SD-WAN – VeloCloud provides a hybrid WAN solution that works with MPLS private links as well as AT&T U-verse, cable, or broadband DSL links. One tech pro reported going from almost zero network visibility to nearly 100% network visibility. It is a great tool for IT management across multiple locations without staff needing to be onsite at all times, facilitating communication and network visibility.

Cisco Identity Services Engine (ISE) – With so many features that help with managing user-facing ports and devices, what’s not to love about Cisco ISE? One huge factor reported by tech pros is the integration of TACACS within Cisco ISE, making it easy to run Cisco ISE as a RADIUS server or TACACS server for network devices. In addition, Cisco ISE significantly improves management of devices, especially restricting machines from devices and sites they are not permitted to access.

Intermedia SecuriSync – For backup and file sharing, SecuriSync is the way to go. As a two-in-one tool for consolidated file backup and management of continuous backups, Intermedia SecuriSync makes relevant files easier to access, as they are all stored in a secure shared folder. If you have team members spread across different locations, this tool is very helpful in making sure the data is always backed up and kept secure. One platform with a master source keeps project collaboration as safe as it can be.

OpenSpan Transformation Platform – OpenSpan collects all employee desktop activities, both productive and nonproductive, including time away from the computer. This platform allows businesses to evaluate, from employee activities, how employees work best and what can be improved upon in order to drive down operational costs and maximize revenue. Providing data about employee activities removes the need for manual employee logs. Eliminating logs that supervisors would otherwise have to analyze for key performance indicators (KPIs), such as call volumes, proves to be a huge time saver. OpenSpan Transformation Platform takes working smarter to a higher level.



If you would like to educate yourself in more detail about the information presented in this blog post, please visit: Fave Raves: 29 tech pros share their favorite IT products

Software Defined Networking – 5 best practices


Software Defined Networking (SDN) provides cost-effective, easily adaptable management of network control and forwarding functions. In simple terms, SDN is the physical separation of the network control plane from the forwarding plane, where a control plane controls multiple devices. Software Defined Networking is an emerging technology and therefore lacks long-term examples to use as a guideline for success. Greg Stemberger, Principal Solutions Architect, has laid out what he has seen in his experience with SDN, creating a five-step process for implementation best practices.

The first step, as it most often is with any new technology deployment, is to define usage. Bringing in a new technology is only helpful if it fits the needs of your organization. Determine the problems your company is facing and evaluate whether the desired technology will be able to alleviate those problems. No one technology will solve all your problems. Identify specific problems you believe SDN can fix, ideally one problem at a time. As Stemberger suggests, “A single use case with tangible, positive results, offers more reliable, measurable outcomes than implementing SDN across your entire network.”

It is crucial to assemble a cross-functional team for SDN. Utilizing SDN correctly means having a skilled team with a united approach, made up of well-versed members who can combine their skill sets and work together. Increasing efficiency lets your IT staff spend more of their time on your IT infrastructure rather than on operational overhead. Get everyone on the same page, working toward a universal goal.

Remember to test in a less critical network area. This is common sense for most. Find a less critical part of the network that you can experiment with before moving to your production network. This way you avoid uprooting your entire network and facing the wrath of angry coworkers. A small-scale SDN test allows the flexibility to learn and make mistakes.

After testing for a while, make sure to go over the data you gathered and review your test case. Did it solve your current problem? Is it a wise investment to expand SDN to the entire network? Do you have the infrastructure ready on both a personnel and a technical level?

As a gentle reminder that it’s okay to stay on the cautious side, it is suggested that you gain maturity before expanding deployment. Rather than diving in headfirst, proceed slowly and make the implementation gradual. Even if SDN performed better than expected in one area of the network, this is no guarantee that the entire network will function at the same caliber. How will SDN performance change across higher-traffic areas of the network?

These steps are meant to evaluate risks, gain perspective and ensure efficiency. In order to get the most out of Software Defined Networking, it’s best to get all your ducks in a row.


If you would like to educate yourself in more detail about the information presented in this blog post, please visit: 5 steps to launching Software Defined Networking

Ransomware

Ransomware is the devilish and extremely debilitating type of malware designed to lock and encrypt files in order to extort money from consumers, business owners, and even government officials. It seems that no one is safe in the fight against ransomware. Most ransomware programs target the most popular operating system, Windows, but ransomware can and will target other systems such as Android, Mac OS X, and possibly even smart TVs in the near future. Not only is this an unsettling forecast for consumers, but also a call to action for preventative measures to protect your most important data files.

What can be done? Most users have learned the hard way that it is better to back up sensitive data to an external hard drive. However, this type of malware is tuned in to that habit. When a ransomware program infiltrates a computer, it infects all accessible drives and shared network locations, encrypting every file it finds. This makes for a very irritating discovery of locked data across the board.

Rather than rely solely on an external hard drive for backups, it is suggested that consumers adopt a new best practice: keep at least three copies of sensitive data, stored in two different formats, with at least one copy stored off-site or offline. This way, if ransomware locks files away, consumers are not forced into the sticky situation of deciding whether to risk paying for data retrieval or losing the data forever.
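
As a rough sketch of that practice on a Windows machine, the commands below mirror a working folder to a second internal drive and to a removable disk that is only attached during the backup. The drive letters, folder names, and log path are placeholders, not a specific product recommendation.

REM Copy 1 is the live data on C:. Copy 2 mirrors it to a second internal drive.
robocopy C:\Data D:\Backup\Data /MIR /R:2 /W:5 /LOG:D:\Backup\data-backup.log

REM Copy 3 goes to a removable drive (E:) that is unplugged after the job, keeping it out of ransomware's reach.
robocopy C:\Data E:\OfflineBackup\Data /MIR /R:2 /W:5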

What to do when faced with ransomware? Not much can be done once ransomware has attacked. Most security researchers advise not paying to have files unlocked, as there is no guarantee that the hackers will provide the decryption key once paid. Security vendors also worry about the implications of fueling the fire: the more consumers give in and pay for the safe return of their data, the more encouraged ransomware criminals become to continue this practice of extortion.

If I haven’t said it enough already, I will say it again: prevention is key. Know how ransomware reaches your computer. Be especially careful of email attachments, Word documents with macro code, and malicious advertisements. Always keep the software on your computer up to date; it is especially important to ensure that the OS, browsers, and plug-ins such as Flash Player, Adobe Reader, and Java are updated whenever updates are available. Unless you have verified the sender, never enable the execution of macros in documents. Finally and most importantly, perform daily activities from a limited user account rather than an administrative one. And always, always use a well-maintained and up-to-date antivirus program.

If you would like to educate yourself in more detail about the material presented in this blog post, please visit:

http://www.pcworld.com/article/3041001/security/five-things-you-need-to-know-about-ransomware.html

Free up disk space on a Windows 2008 Server without a system reboot

Disk Cleanup can be a very useful tool, especially on servers when disk space is fully utilized.  But what do you do when you are working in a production environment and you can’t reboot the server?  In a pinch, I found the following process to clean up a congested system partition.

Copy cleanmgr.exe to System32 folder

A favorable alternative to adding the Desktop Experience feature to your production server, while requiring no system reboot or maintenance window, is simply copying the cleanmgr application files to their appropriate location.

There are two files that need to be copied to the Windows System32 folder.  The only downside to this process is that the disk cleanup utility will not appear in the normal disk drive properties.  However, as these files will be copied to the System32 folder location, the utility can be easily launched from the integrated server search bar.

The two files you are looking for are cleanmgr.exe.mui and cleanmgr.exe. Below is a listing of the folder locations where these two files can be found for the most popular versions of Server 2008.  After you have found these files, simply copy them to their designated locations as described below.

  • Copy cleanmgr.exe to the System32 folder
  • Copy cleanmgr.exe.mui to System32\en-US

Windows Server 2008 64-bit:

C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr_31bf3856ad364e35_6.0.6001.18000_none_c962d1e515e94269\cleanmgr.exe.mui

C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr_31bf3856ad364e35_6.0.6001.18000_none_c962d1e515e94269\cleanmgr.exe

Windows Server 2008 R2:

C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr.resources_31bf3856ad364e35_6.1.7600.16385_en-us_b9cb6194b257cc63\cleanmgr.exe.mui

C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr_31bf3856ad364e35_6.1.7600.16385_none_c9392808773cd7da\cleanmgr.exe

To run the Disk Cleanup utility, simply navigate to a command prompt or Run and type cleanmgr.exe.
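
As a quick sketch using the Server 2008 R2 paths above (substitute the winsxs folder names that match your build), the whole process can be run from an elevated command prompt:

copy C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr_31bf3856ad364e35_6.1.7600.16385_none_c9392808773cd7da\cleanmgr.exe C:\Windows\System32\

copy C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr.resources_31bf3856ad364e35_6.1.7600.16385_en-us_b9cb6194b257cc63\cleanmgr.exe.mui C:\Windows\System32\en-US\

REM Launch Disk Cleanup against the system drive
cleanmgr.exe /d C: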

Acronis Backup & Recovery 2012 has Some Serious Issues

Acronis has some issues that need to be addressed. bva has been a serious fan of this product, but of the older versions, which were rock-solid.  I have to tell you that we are very disappointed with version 2012; we even worked with Acronis directly, and even their most tenured technician had problems getting it to work.  To speak candidly, that technician never got it set up correctly.  bva got a full registered copy of Acronis 2012 True Image.  We installed it on a brand new server with plenty of resources as well as a hefty storage target with about 7 TB of usable disk space for backup images.  This was the first time we installed this version, and we had issues at every turn.  A summary of the problems follows:

Once installed, Acronis started causing problems, including:

  • It incredibly slowed the server. With Acronis installed, everything ran very slowly, program launches took an enormous amount of time, and the system frequently froze.
  • Once Acronis was uninstalled during testing, everything ran as fast as before.
  • The server agents refused to load properly, and when we did get them to install, they could not be seen by the backup server (main console).
  • Acronis took several minutes to show its interface, and launching a backup often froze the server.
  • Bva found the restoration process problematic and painful.
  • When we started the recovery from backup, Acronis restarted the server, which is ridiculous, and we would get errors such as “Impossible to read the source. Try the destination again”.

bva then reached out to support and chatted with an Acronis technician, described by a GM of the local office as a guru.  We sat on the phone for two hours and reconfigured everything from scratch, and they still could not get the backup to work properly.  The biggest issues that we see with this product are as follows:

  • Difficult to install and configure, needs to be a lot easier
  • The indexing of the data takes too long
  • The verification process takes way too long
  • The process associated with multi-level backups is very tough.  One job needs to be completed before another starts.  We tried to set up a single job that backed up the data to drives and then to tape; this simple process took almost two days for 4 TB.

Reliable Back Up and Setting Correct Expectations

Over the last five years I have seen a more passive approach to backup and disaster recovery.  Organizations are letting data reliability take a back seat to system uptime and performance, which is starting to become scary.  I typically ask CEOs and owners what an acceptable amount of downtime for their business would be, and they all reference about 2 to 4 hours.  It always amazes me what expectations people in power have about how quickly their systems can come back up.  Never taken into account is how long it takes to build the new system, as well as the time-consuming process of moving data from one location to another.  It is something that is always overlooked in normal system installations.  Many businesses feel that their system can be up in 4 to 5 hours, yet when we review and assess a small to medium size business, we typically find that the average rebuild time for a single server that has a disaster is roughly 10 hours.  Of course, the 10 hours for a single server consists of:

  • server build via operating system install and patching
  • application set up and configuration
  • shares/drive set up
  • data migration
  • testing and validation

It is very important to build and structure a network system that can facilitate an agreed level of downtime.  In other words, if management decides that the network can only be down for 4 hours, no matter what time of day it might be, that will drive a completely different backup system and methodology than if bva is told that 12 hours is satisfactory from 8am to 5pm on weekdays.  Documenting the process and timeline for bringing the system back up is critical and imperative.

Many businesses are looking to move their data into the cloud and often tell bva that it is a cheaper alternative to onsite backup, but I can tell you that is not the case.  Moving the data offsite in a reliable and consistent manner can be a bit tricky depending on the solution.  For the solution to thrive, you need a reliable telco connection, such as fiber, as well as a stable power grid.  Depending on the solution, data can cost roughly $4 to $12 per gigabyte (GB), depending on the compliance standard set forth for data retention (30 days, 12 months, 5 years, 7 years).  There are several great software packages out there that can be loaded on any server and are completely hardware agnostic.  This software drives the backup job and can point it to any iSCSI target. It can also move the data offsite to any destination you prefer, and the software you select will typically provide that option via several data centers.  Microsoft, Google, Amazon, and even Apple are a few that have gotten into this business and will continue to grow as large backup solution providers.
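
As a rough illustration of pointing a backup job at an iSCSI target, the Windows built-in iSCSI initiator can register and log in to a target from the command line; the portal address and target IQN below are placeholders for whatever your storage or backup vendor provides.

REM Register the storage device's portal (placeholder IP address)
iscsicli QAddTargetPortal 192.168.1.50

REM List the targets the portal exposes, then log in to the backup target (placeholder IQN)
iscsicli ListTargets
iscsicli QLoginTarget iqn.2012-01.com.example:backup-target

Once the target is logged in, it appears as a local disk that the backup software can use as its destination.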

Virtual Desktop Infrastructure (VDI); Session Based Computing

Spring is fully upon us and the summer heat is looming in the not too distant future. Many of us are planning our summer vacations to beat the heat and spend time with our friends and families. While our minds are probably already off at some beachside locale, there is still a bit of time before we’ll be flying there ourselves. In the meantime, perhaps now is as good a time as any to look into moving your business over to an older and simpler way of computing.  Session-based technology has been around for many years, and at one point in the late 90’s/early 2000’s it was a very popular desktop architecture.  For a variety of reasons it became less popular, primarily because desktop hardware costs decreased significantly.  Session-based computing is where you take all the data and processing activity off the local desktop and have it take place on a robust server.  By doing this you can have multiple desktop sessions running on a single server if you were so inclined.  For best practice methodology, bva recommends spreading all sessions over two (2) servers to ensure uptime and load balancing for the user community.  The great advantages of session-based computing are the following:

  • Smaller Footprint
  • Eco-Friendly and More Green
  • All Data on Servers, No Loss of Data
  • Seamless and Consistent Interface over Different PC’s
  • Ability to Leverage Older PC Hardware for Production
  • Ability to Leverage Newer Operating Systems Virtually Without Conflict
  • Application Virtualization Ensures Seamless User Experience

The most popular products leveraged today for this type of architecture are as follows:

  • Remote Desktop Services (Terminal Server)
  • Citrix Systems
  • VMware View

Virtual Desktop Infrastructure (VDI) is closely related to session-based technology. VDI is an emerging architectural model where a Windows client operating system runs in server-based virtual machines (VMs) in the data center and interacts with the user’s client device, such as a PC or a thin client. Similar to session virtualization (formerly known as Terminal Services), VDI provides IT with the ability to centralize a user’s desktop; instead of a server session, however, a full client environment is virtualized within a server-based hypervisor. With VDI, the user gets a rich and individualized desktop experience with full administrative control over the desktop and applications. However, this architecture, while flexible, requires significantly more server hardware resources than the traditional session virtualization approach.

Key benefits of VDI are:

  • Better enablement of flexible work scenarios, such as work from home and hot-desking
  • Increased data security and compliance
  • Easy and efficient management of the desktop OS and applications

How to Clean Temporary Files for Profiles on Terminal Server

A terminal server was running low on disk space. I ran TreeSize and found that a number of the profiles had a lot of temporary files as the culprit. But how does one clean them all at once without logging on to each one? I found the tool ICSweep. This is a DOS-based application which is very simple to use: just extract it to a drive, open a DOS prompt, and browse to the directory. Then run icsweep.exe and it will run through all the profiles and clean up the temp files. On this server it freed up over 2.5 GB of data. A very handy and quick tool to remove unwanted temp files.
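
For reference, a minimal run looks something like the lines below; the extraction folder is only an example, and the available switches vary by version, so check the documentation included with the tool before using any options.

REM Browse to wherever ICSweep was extracted (example path) and sweep temp files across all local profiles
cd /d C:\Tools\ICSweep
icsweep.exe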

BVA Cloud Offering – Virtual Servers – Cloud Servers

For about two years now BVA has developed and tested an offering that allows organizations the opportunity to move their physical servers off-site into the cloud.  There are several advantages to this type of architecture that really help businesses grow and increase their overall uptime and satisfaction.  This offering is important for BVA as well as organizations here in the valley.  The days of replacing physical servers are over for small to medium size businesses.  It’s important to reevaluate your long-term strategy so that it falls in line with what is going on in the technology world.  Too many times we see organizations continue with the old line of technology due to inexperience and a lack of confidence in change.  The offering that BVA has decided to adopt truly leverages virtualization in a very robust and redundant infrastructure.  It is housed at a local data center that is a tier 1 facility.  The virtual cluster/farm is roughly 40 physical server hosts leveraging the VMware virtualization platform.  This offering can provide a public and a private cloud solution. Both solutions are very reliable and offer a 100% SLA on hardware uptime, which is quite beneficial and worth the cost associated with this type of infrastructure architecture.  The environment has redundant switches, firewalls, power, and bandwidth.  There are three different ISPs (Internet Service Providers), which allows true redundancy when it comes to backbone connectivity.

The biggest difference between the two offerings is pretty straightforward:

  • Public Cloud Server – a 40-physical-server host environment that is not dedicated to your organization
  • Private Cloud Server – a 3-physical-server host environment that is dedicated to your organization

The cost is really aggressive, and I feel it’s realistic for a small to medium size business looking for advanced IT solutions.  Here is a good example of the cost structure, which varies depending on the specifications needed per server:

  • 2 Processors, 2GB of RAM, 250GB of 15k rpm Disks = $350/Month
  • 4 Processors, 8GB of RAM, 250GB of 15k rpm Disks = $610/Month
  • 4 Processors, 16GB of RAM, 350GB of 15k rpm Disks = $950/Month