
Desktop Analytics – the new black

(Originally published on LinkedIn)

On 16 October, Microsoft released a new tool called Desktop Analytics, and we were quoted in the announcement, which to me is insane but also proves that we are doing the right things right now.

We have committed to follow the Windows 10 feature upgrade schedule of two updates per year, which puts high demands on our applications and devices to be ready for it. That is where Desktop Analytics comes into play. The tool provides us with insights into all applications present on our computers, so we can identify many known issues before they cause problems.

By adopting this workflow, we can create more dynamic pilot groups to make sure we cover as many scenarios as possible before deploying the update to all end-users. This will also help us build greater trust in the organization around Windows 10 feature updates.

Having bigger upgrades of Windows twice a year is a tremendous change from how things were done in the past, when larger upgrades were released every 3-5 years. This pace comes with a lot of new challenges in an environment as large and complex as ours, with a lot of older applications that were not designed for Windows 10. We are seeing most applications work, but this also puts a larger responsibility on application owners to keep their applications up to date and move quickly if there is a problem.

We still have things to do here, but we are getting there, and new tools with access to better data will help us make better decisions going forward.

If you haven’t yet read the blogpost from Brad Anderson, you can find it here: https://www.microsoft.com/en-us/microsoft-365/blog/2019/10/16/announcing-general-availability-desktop-analytics/


Increasing device flexibility

(Originally posted on LinkedIn)

Let’s dig into hardware, since this is an important part of the workplace services.

In the old world, central IT basically dictated what computer to buy (you had a handful to choose from), and the ones available probably didn’t really fit your needs, but they were the closest you could get.

Okay, not THAT extreme, but I hope you get the point.

Limiting the selection of computers (with a set specification for each) is great in some ways:

  • Standardized range of models
  • No “surprises” for the support team
  • Easy for end-user to pick a device
  • Life cycle management becomes easier
  • Centrally decided which models and specifications to use = no discussion

There is also a flaw in this setup: there is no room for flexibility or individual user needs. You get stuck with something close to what you needed, but not quite.

Let’s start with an example

You have this range of computers to choose from:

  • Computer A – Small lightweight laptop, great for travel but not powerful
  • Computer B – Standard laptop, fairly mobile, fairly powerful.
  • Computer C – Powerful and large workstation, lots of power, lots of memory.
  • Computer D – Executive top model. Pretty powerful and slim design. Expensive.

For a user who travels a lot and needs a powerful computer, are any of these a good fit?

Taking a new approach

As part of the transition from one hardware vendor to another, we decided to change this approach and offer a broader range, even including models that overlap. All of them can be specified to the user’s needs. In this context, “range” means certified for our custom image.

This also meant that we offered a more complex setup, potentially offering about 15 computer models to our end-users. This is where local IT plays an important part: creating the custom range for THEIR site. For us, local IT are the ones providing users with hardware, and that hardware should be fit for purpose for the end-user’s needs.

Just because we centrally offer 15 models doesn’t mean that all 15 should be offered to end-users at every site. Most sites actually ended up offering just a few models, BUT they could still get that special machine that only a few users per site need, plus the possibility to upgrade the processor, RAM and hard drive size without making it a non-standard device.

New challenges for central IT

Having this broad offering created new challenges for us as central IT. How do we explain to local IT when to pick which computer, especially when models overlap? This is something we hadn’t dealt with in the same way before, and it also positioned us in a different place.

We are becoming an enabler rather than a provider.

Positioning ourselves as enablers doesn’t just apply to hardware; the same could be said about a lot of our new services. But this is where we need to go, since we operate on business demands and not on what we think is interesting. We enable the business to succeed, and to do that we need to understand and meet its demands. Once again, understanding each local business need is very hard for a central organization, so we need local IT staff to help users navigate the jungle we are creating by adopting a more flexible environment where we no longer dictate which devices can be used.

The conclusion

So how do we tackle this? We have only found one effective way, and that is information: information about the services and information about the hardware, so that a good decision can be made as close to the end-user as possible.

However, we are not making things easier for ourselves right now. We are about to enable Windows and Mac managed from Intune. How should we position these, and why should one be picked over the other, or over the traditional custom Windows PC? We are working hard on creating good service descriptions to assist in making this decision together with the end-user. Defining what you can do, but also what you cannot do, with each service becomes increasingly important for making that choice.

Since the modern workplace puts more focus on the user, the approach to which device the end-user consumes services on must change. We cannot be a “Windows only” environment anymore. Different people have different needs, and if we want to remain an attractive employer, which device you can use is not something IT can afford to dictate. You need to meet end-users on their own ground and provide tools they are comfortable with and used to working with, since they will bring their own work style.

Today we are doing this shift with our devices. Who knows, tomorrow it might be the applications.


Moving to modern management

(Originally published on LinkedIn)

I guess by now, most people are back from summer holidays (at least in Sweden) and I always feel that the much-needed summer break acts as a reboot both for motivation and ideas.

This fall will contain a lot of exciting things happening at once. The one I’m most excited about is introducing Windows Autopilot and an Intune-managed PC. This is a TREMENDOUS change for us, and it is probably just “part 1”.

Traditionally, for the last 20 years or so, we have managed computers in the same way: using on-premises server infrastructure and creating our “own” Windows version. This has gone through several different generations; we are currently on “generation 4”, which is based on Windows 10. We manage these custom images using Config Manager and a bunch of group policies.

That’s how we have “always” done it and we are comfortable doing so.

But what happens when things are moving to the cloud and we change our work habits?

We don’t have the same work style today as we did back when Windows XP was released, or even Windows 8.1. The world has changed, and it keeps on changing. We are moving to consume things as a service, and our “office” might not be on the corporate network all the time. Does it make sense to use a client heavily dependent on (and designed for) on-premises infrastructure?

After a lot of preparation, we will this fall start testing how we can use Intune to manage PCs and enroll them through Windows Autopilot.

This is truly exciting and a big shift for us, moving from a very old-school approach of wanting to manage everything to more of a light-touch approach where we manage what’s needed to keep the device and the information secure.

“Does this setting add any value?”

Coming from an old-school setup, we have A LOT of policies and preferences configured. Some make sense, some are old leftovers that never got removed, and some are obsolete. We have even found some Windows XP settings that are still there but no longer get applied. So how do we decide what to keep?

We inventoried all the settings a typical PC has in our environment and did a rough identification of which GPOs correlate to MDM policies. But not all of these settings make sense in a new world where we want light touch.

Our working thesis has been: “Does this setting add any value?”. By asking ourselves that question, we are trying to avoid configuring things just because there is a setting for them. This has left us with a more relevant configuration. We removed a lot, but we also kept a whole lot of settings, so not all of our “legacy” settings were irrelevant.
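
As a rough illustration of that inventory step (a sketch, not our exact tooling), the GroupPolicy PowerShell module can export every GPO as XML, which gives you a searchable baseline to compare against the MDM policies available in Intune:

    # A minimal sketch, assuming the GroupPolicy module on a domain-joined
    # admin workstation. The output folder is just an example path.
    Import-Module GroupPolicy

    $outDir = "C:\Temp\GpoInventory"
    New-Item -ItemType Directory -Path $outDir -Force | Out-Null

    # Export one XML report per GPO so the configured settings can be
    # reviewed and mapped against their MDM/CSP equivalents.
    Get-GPO -All | ForEach-Object {
        $name = $_.DisplayName -replace '[\\/:*?"<>|]', '_'
        Get-GPOReport -Guid $_.Id -ReportType Xml -Path (Join-Path $outDir "$name.xml")
    }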

Innovating for all users – led by the few

In our very first “version” of a modern-managed Windows computer, we are leaving ALL on-prem things behind. No co-management, no hybrid join, no file shares. It’s a clean cut.

However, a lot of things that many users need still reside on-prem, which means this new platform is not fit for all scenarios at this point. But that was not what we were going for. This will be a cutting-edge platform targeted at users who can, and are willing to, break free from the old environment and who mostly use cloud-based applications.

However, our objective is to use the learnings from this modern platform to improve our standard platform, helping drive innovation for all our users!

Cutting lead time

One massive thing this will also mean for our end-users is shorter lead times when setting up a new computer, even if we use White Glove so that local IT can put their touch on the device and provide that little extra service only they can.

Today, imaging takes from 1.5 up to 3 hours for our image (taking into consideration that not all sites have a superb internet connection). If we can reduce this, our users could potentially receive their computers much faster, even with a hands-on step by a local IT technician for those end-users who are not comfortable doing the enrollment themselves. Our infrastructure might not yet be mature enough for full coverage, but we can start on the bigger sites without any issue.
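
To give a feel for what the Autopilot preparation can look like (a hedged sketch, not a description of our exact process), the community Get-WindowsAutoPilotInfo script from the PowerShell Gallery collects the hardware hash that is then imported into Intune to register the device:

    # A minimal sketch, assuming an elevated PowerShell prompt on the device
    # and access to the PowerShell Gallery. The output path is an example.
    Install-Script -Name Get-WindowsAutoPilotInfo -Force

    # Collect the device's hardware hash into a CSV that can be imported
    # into Intune to register the device for Windows Autopilot.
    Get-WindowsAutoPilotInfo.ps1 -OutputFile C:\Temp\AutopilotHash.csv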

Where are we right now?

Right now, we are in an early pilot phase where we are identifying the last things before we can let some real users try this (we are basically 4-5 people running a cloud-managed PC). It’s still limited to a “cloud only” environment without any connection to Config Manager or other on-prem systems, so it will not be for everyone at this stage. But it will help us find the road forward to our next-generation workplace.


Staying current in the new world

(Originally published on LinkedIn)

In this post, I’ll keep covering our digital transformation. If you haven’t read the previous parts, you can find the first part here and the second here. This is the story of how we left a legacy workplace in 2018 and started to build for the future.

One thing I’ve noticed you often come across when working on bigger changes, especially when moving to new technology, is variations of the phrase “yeah, we don’t do it like that here, it would never work”.

If you have never tried it and you don’t really know what it is/means, how can you be so sure that it will not work?

I quite often play the “hey, I’m a millennial” card when discussing change (it works surprisingly well), especially when I talk about things that might be a bit naive and oversimplified. But it’s an effective way to push forward and skip over some of those road bumps you tend to get stuck on.

We now live in a world which is ever changing when it comes to the workplace. You can update the Office suite every month and Windows feature updates are released every six months. This is quite different from the past.

So how did we decide to navigate this?

The first step we took was to accept that this is what the world looks like now. No matter how much we complain by the coffee machine, this is the reality now.

The second step is to sell this to the organization, especially key stakeholders such as application owners and senior management. This is the tricky part since this is not so much technology as politics.

Instead of seeing each upgrade as a project in itself, we built a process to support this flow of an evergreen world. This means that once we have finished the last step in the process, it’s time to start over again. Our process contains the following steps (imagine this as a circle; a small sketch of the cadence follows the list):

  1. Inform stakeholders that a new release is coming in 2-3 weeks.
  2. Release update to first evaluation group (ring 0) to clear any compatibility issues in the environment.
  3. Release update to second evaluation group (ring 1) which contains application testers for business-critical applications, to give them as much time as possible to evaluate.
  4. Release update to third evaluation group (ring 2) which contains application testers for important business applications which are not deemed critical but still would like to evaluate on an early stage.
  5. Release update to the first pilot group for broad deployment (ring 3) to make sure that deployment works on a global scale. This step is estimated to happen 2-3 months after the Windows 10 feature upgrade is released, but it also depends on the outcome of the previous steps.
  6. Release update to broad production (ring 4).
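
To make the cadence concrete, here is an illustrative sketch of the ring schedule expressed as data; the release date and week offsets are example values, not our actual deadlines:

    # An illustrative sketch only: ring names match the list above, but the
    # release date and week offsets below are example values.
    $releaseDate = Get-Date '2019-05-21'

    $rings = @(
        @{ Name = 'Ring 0'; Audience = 'First evaluation group';                OffsetWeeks = 0 }
        @{ Name = 'Ring 1'; Audience = 'Business-critical application testers'; OffsetWeeks = 2 }
        @{ Name = 'Ring 2'; Audience = 'Important application testers';         OffsetWeeks = 4 }
        @{ Name = 'Ring 3'; Audience = 'First pilot for broad deployment';      OffsetWeeks = 10 }
        @{ Name = 'Ring 4'; Audience = 'Broad production';                      OffsetWeeks = 14 }
    )

    # Derive a planned start date for each ring from the feature update release date.
    foreach ($ring in $rings) {
        $start = $releaseDate.AddDays(7 * $ring.OffsetWeeks)
        '{0} ({1}): planned start {2:yyyy-MM-dd}' -f $ring.Name, $ring.Audience, $start
    }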

During this entire process, we are monitoring the deployments and keeping track that nothing breaks. If an application is identified as problematic, the affected computers can simply be rolled back to the previous version of Windows 10, and that application is put on an exclusion list (basically placed in ring 5) until the application owner has taken action. This has, however, not happened yet.
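
As an illustration of the rollback mechanism itself (standard built-in Windows tooling rather than a description of our runbook), a feature update can be rolled back from an elevated prompt as long as the uninstall window has not yet expired:

    # A minimal sketch using built-in DISM options (Windows 10 1803 and later).
    # Check how many days remain before the previous OS version is cleaned up.
    DISM /Online /Get-OSUninstallWindow

    # Roll the device back to the previous Windows 10 feature update.
    DISM /Online /Initiate-OSUninstall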

Does this process work in the real world?

Yes. We ran through this process, albeit at a slightly higher pace, when moving from Windows 10 1709/1803 to Windows 10 1809. To our knowledge, we did not have any major incidents where we broke an end user’s computer. We upgraded roughly 18 000 computers in a matter of a few weeks.

We did have errors though, and a lot of them during the first week. But all errors indicated that users were not able to run the upgrade (it was blocked). This was expected based on the tests we had run with the earlier rings, and nothing we couldn’t handle. Everyone stayed confident in the servicing, and all errors were either “solved by themselves” or fixed by our technicians, in bulk or case by case.

After our first major Windows as a Service experience, we still trust the servicing; if anything, the upgrade left us even more confident that the Windows as a Service process works.

BUT, having static rings as we do today is far from ideal. Until we have better tools (such as Microsoft Desktop Analytics) to create dynamic rings, this is our approach. We will spend some time fine-tuning the setup and move to dynamic rings once we have the tools.

The outcome

  • Users had the update available for 21 days; after that, the installation was mandatory
  • We upgraded roughly 18 000 computers in about a month
  • No major application compatibility issues
  • Branch Cache took about 50-60% of the workload
  • No reported network disturbances during this time caused by SCCM

Bonus learning

One thing we realized quite early on was that the phrase “application testing” scares people, especially management. The general feeling is that testing is expensive and time-consuming, and it causes unwanted friction when you want to speed up the pace. Therefore, we decided to rephrase it: we were not aiming to do “application testing” in rings 1 and 2, we were aiming to do “application verification”. This minor change in wording changed the dialogue a lot, and people became less scared of the flow we set up. Verification is less scary than testing.


Deploying the future

(Originally published on LinkedIn)

This is the second part of a series about the digital transformation journey we are doing at Sandvik. You can find the first part here, Leaving legacy in 2018.

When I joined Sandvik back in 2017, we were right in the middle of upgrading our Configuration Manager environment from SCCM 2007 to SCCM Current Branch. This was a huge project into which we, together with our delivery partner, invested a lot of money and time.

We finally pulled through. Everyone involved in the project made a huge effort to get us there, from the SCCM delivery team and technicians to local IT. This was our first step towards the future for our clients, and it meant we could start working on Windows 10.

Configuration Manager and deploying applications were, however, still somewhat of a struggle for us. Every other time we did a large deployment, we had to deploy in waves and spend a lot of time and effort on not “killing” the slower sites, which often meant deploying at odd hours and asking users to leave their machines on at the office overnight. More than once we had to pull the plug on a deployment because we were consuming all the network bandwidth for some sites, even the bigger ones. We did have a peer-to-peer solution, but it was not rolled out to all sites and machines.

We had to fix this.

Since we had moved to SCCM CB, a lot of new opportunities opened up (maybe not from day one, though), which meant that we actually had tools in our toolbox to solve this in a new way, such as Branch Cache and Peer Cache (which in themselves are not new functions).

We decided to start with Branch Cache, since our biggest problem was application distribution. We piloted Branch Cache at a few sites to see if we could actually gain something from it, and the results were really promising, so we started deploying it throughout our whole environment, starting with the most prioritized sites without local distribution points and then moving on to all sites. Once Branch Cache was widely deployed, we scaled down our 1E Nomad solution and eventually removed it.
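
For anyone who wants to see what this looks like on a client, here is a hedged sketch; in practice Configuration Manager client settings enable Branch Cache at scale, so the cmdlets below only illustrate what happens under the hood:

    # A minimal sketch, run in an elevated PowerShell session on a client.
    # Enable BranchCache in distributed cache mode, where peers on the same
    # subnet share downloaded content directly with each other.
    Enable-BCDistributed

    # Optionally cap how much disk space the local cache may use (example value).
    Set-BCCache -Percentage 10

    # Verify that the service is running and inspect the current cache status.
    Get-BCStatus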

We managed to do the following bigger things without causing network interference, while seeing Branch Cache being well utilized:

  • Deploy Office 365 ProPlus update to > 25 000 computers
  • Deploy Windows 10 feature update to > 18 000 computers

And then we had the one we are most proud of to date. We deployed Teams to > 25 000 users, with a Branch Cache utilization of 70%. This is our best number so far for applications, and that is without yet using phased deployments in Config Manager.

Our next step right now is to get Peer Cache out to a few sites, especially sites with bad connections to the closest distribution point. The reason we want Peer Cache out in the environment is to ease PXE installation on our smaller and remote sites. In parallel, we are also investigating how we could utilize LEDBAT for the traffic between our SCCM servers. This, however, requires that our SCCM servers run at least Windows Server 2016, and we are not completely there yet. But there is still a lot of time left in 2019!
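
For reference, and as a hedged sketch rather than our implemented configuration, Windows Server 2016 and later can mark specific traffic, for example SMB on port 445 served from a site server, to use the LEDBAT congestion provider so that it backs off whenever business-critical traffic needs the bandwidth (newer Configuration Manager versions can also enable LEDBAT per distribution point in the console):

    # A minimal sketch for Windows Server 2016 or later, run as administrator.
    # Create a custom TCP setting profile that uses the LEDBAT congestion provider.
    Set-NetTCPSetting -SettingName InternetCustom -CongestionProvider LEDBAT

    # Apply that profile to SMB traffic served from this machine (local port 445).
    New-NetTransportFilter -SettingName InternetCustom `
        -LocalPortStart 445 -LocalPortEnd 445 `
        -RemotePortStart 0 -RemotePortEnd 65535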

The takeaway from this

The biggest takeaway: Branch Cache works, and it works really well. If you have not yet started to investigate Branch Cache, I would advise you to do so. It has saved us a lot of headache and time, since we can now deploy with great confidence that our traffic, which might not be as critical, will not disturb our critical business systems. The fact that we have managed to reduce WAN traffic by up to 70% for larger deployments has improved other teams’ trust that we can deploy things in a disturbance-free way.

I also want to point out that our team of technicians and architects has done tremendous work making this possible.