Categories
Intune Modern Workplace

Why should you care about your phones?

(Originally published on LinkedIn)

By now, you have gone through several generations of practices for how and why to manage your computers, whether through a Microsoft product such as #ConfigMgr or a third-party product like SpecOps. For Windows, managing the device is standard procedure, and most larger organizations have some sort of management in place.

But what about your mobile devices such as your iPhones, iPads, and Samsung phones? Are those managed?

Why should you manage your mobile devices?

There are many arguments for managing your mobile devices, such as keeping an inventory, security, and ease of use.

But why should you care? What’s in it for you?

Knowing what devices you have in your organization, who has them, and whether they are used is increasingly important in a cloud-centric world. Devices no longer live only on the corporate network, and mobile devices never even made it there.

Adding management to your mobile devices can provide you with many benefits:

  • Keep track of which devices are used, and by whom
  • Use a mobile device as a factor in multi-factor authentication scenarios
  • Ease access to corporate data for your end users
  • Distribute software and settings (much like on Windows), making the user experience smoother
  • Ensure that your corporate data is safe

There are several other arguments for this as well.

But to keep it short: you will gain control of which devices are used, and by whom, in your organization. These devices are also most likely accessing corporate data, and it’s a clever idea to manage the data on them (to minimize incidents).
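To make the inventory point concrete, here is a minimal Python sketch of how you might summarize a device inventory by user and flag devices that look unused. The device records, field names, and staleness threshold are hypothetical stand-ins for what an MDM inventory (for example, Intune’s managed-devices list) might give you, not an actual API.

```python
from datetime import datetime, timedelta

def summarize_inventory(devices, stale_after_days=30, now=None):
    """Group managed devices by user and flag devices that have not
    synced recently (a rough proxy for 'is the device still used?')."""
    now = now or datetime(2020, 1, 1)  # fixed date for a deterministic example
    by_user, stale = {}, []
    for d in devices:
        by_user.setdefault(d["user"], []).append(d["name"])
        if now - d["last_sync"] > timedelta(days=stale_after_days):
            stale.append(d["name"])
    return by_user, stale

# Hypothetical inventory records, shaped loosely like MDM output.
inventory = [
    {"name": "iPhone-001", "user": "anna", "last_sync": datetime(2019, 12, 30)},
    {"name": "iPad-002",   "user": "anna", "last_sync": datetime(2019, 11, 1)},
    {"name": "SM-G973",    "user": "erik", "last_sync": datetime(2019, 12, 28)},
]

by_user, stale = summarize_inventory(inventory)
print(by_user)  # {'anna': ['iPhone-001', 'iPad-002'], 'erik': ['SM-G973']}
print(stale)    # ['iPad-002']
```

Even this simple grouping answers the two questions above: who has which devices, and which devices have probably been forgotten in a drawer.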

What’s in it for the user?

So why would your users care whether their device is managed or not?

A lot has happened since the iPhone was introduced back in 2007: the services available, the threat level, user behaviour, and more. We have also gained a lot of new possibilities in mobile device management over the last couple of years. New settings constantly become available to manage, making end-user onboarding better. We can configure email accounts, deploy corporate Wi-Fi credentials, install business-related apps, and much more. But we can also enforce security measures such as a PIN code and encryption.

Lately, we can also establish trust in a device by registering it in Azure AD, thereby claiming it as trusted and not enforcing MFA every time the end user tries to access the corporate sphere. This improves the user experience and at the same time gives you a higher level of security, since you know which device your data is accessed from.

Another important thing for the end user is that you can now remotely assist them if they forget their device PIN or need some other help. For some platforms, there are even remote tools, e.g. through TeamViewer, so that your support team can see what the user is seeing.

So why should you care?

Because the behaviour of the workforce is changing. The term “mobile-first” isn’t really applicable anymore, but if you look at what devices people are using, they spend a lot of time with their smartphones. So why wouldn’t you secure this device and make it a member of your IT environment? There is a lot of hidden potential here, where you can provide a valuable experience throughout the whole life cycle of the device (from onboarding to decommissioning).

Especially if you look at the younger generations of your workforce: they are more heavily dependent on their mobile devices, and if you are not on top of this at an early stage, you will have a lot of catching up to do.

And just to be clear, I’m not suggesting that you manage your mobile devices the way you do your on-prem computers. Adapt to what the mobile device management world looks like and protect the right things (data and identity). Locking the device down so it is no longer useful from an end-user point of view will only make your end users find ways around it, and then you are back to square one.

What are your thoughts on this? Leave a comment!

Categories
Modern Workplace

Evergreen – the road to stay current

(Originally published on LinkedIn)

I’ve touched on this in an earlier article, but it’s worth coming back to.

When we talk about Evergreen, we often get stuck in talking about Microsoft products (Office, Windows, Config Manager), but “Evergreen” is larger than that.

Keeping applications up to date is a challenge we struggle with, like everyone else. There is basically always a newer version of our VPN client available, and the one we have in production does not support the latest Windows 10 feature release (this has genuinely been the case ever since we started servicing Windows). It is not the only one; there are several other applications that are hard to keep up with.

You might argue that we don’t need EVERY version of our VPN client, and that is true. We need the one compatible with our back end and the latest Windows version.

But there are other applications which are working in the Evergreen context.

In our IT environment, we have several other applications which have a lifecycle much like Windows or Office, but sometimes with an even higher pace.

Two examples are Google Chrome and Adobe Creative Cloud. However, we don’t give them nearly as much love as we give the Microsoft applications, even though many organizations have a crazy high penetration of Google Chrome usage without it even being the default browser. Google updates Chrome every six weeks, which is about 8-9 times a year. Wanting to keep up with this and testing every release is a huge effort.

One could also argue that a lot of web-based services are evergreen, since they are constantly updated, a little bit at a time. Sometimes smaller changes, sometimes bigger (like when Facebook changed its design a few years back and everyone went crazy). But taking this to the desktop world is where the new challenges lie for the corporate world.

This is a vast area of improvement: realizing that Evergreen spans beyond the soft and cosy Microsoft bubble.

My point is not to build a big, complex process for every little application, but to approach the Evergreen concept with a bit more ease. The idea is not new; it has been around for quite some time, at least for browsers.

This might be a little oversimplified, but for many applications you don’t need a big testing process for every Windows 10 feature update or Office 365 release. Of course, for business-critical applications and applications with a lot of customizations/integrations, testing is a good idea, but those can’t be the majority of your applications. By prioritizing which applications need application testing, you minimize the effort of moving between versions in an evergreen world. Think of it as application verification rather than application testing, since you mostly want to make sure the application still works (which it most likely will).

We could also twist it a bit. Your users carry a smartphone, let’s say an iPhone. Apps on that iPhone that come from the store are updated on a regular basis, and you don’t really control when Microsoft wants you to update Outlook to a later version on the phone. But it still works even after being updated. Of course, there aren’t as many integrations for mobile apps as for desktop apps, but I want to highlight the mindset.

However, this also puts great demands on the ISVs, and you need to set clearer requirements for your ISVs to commit to this process when discussing and dealing with line-of-business applications.

The world has changed, and we need to adapt to this, even if we think it’s scary and will give us a lot of extra work.

And to loop back to a previous post: to navigate the evergreen jungle, Desktop Analytics should definitely be your best friend, since it can provide really good insights into applications, drivers and much more!

I hope this article inspired you to start looking into how you can get moving with the Evergreen concept within your organization, and feel free to leave a comment or send me a DM if you want to discuss this further!

Categories
Digital Transformation Modern Workplace

Dare to break old habits in 2020

(Originally published on LinkedIn)

We all love email, don’t we? It’s such a fast and efficient way to communicate. You can just write your short message in the subject line and the person you send it to will see straight away what you wanted to ask…

Okay, there might be some irony in that part.

Email is great, but not for communicating “one to few” in 2020; there are so many other great tools. We also have a new generation of workers showing up who don’t really get the whole email thing, and then there is the problem of crowded inboxes. I’ve met people who have over 10 000 unread emails, and I bet you have too. How would your email even be found or noticed in that case?

So, what can we use instead?

What if there were a tool based on chat, much like text messaging, where you could easily share documents and keep all conversation history? Oh, and group chats to include more people would be awesome!

In fact, there are several tools that do this, such as Microsoft Teams, Slack or Google Hangouts. But since I’m a strong Microsoft advocate, I’ll focus this article on the Microsoft product: Teams.

What is Teams?

There has been a lot of buzz around Teams for quite some time now, and if you are not looking into it yet, it’s time to get started, since Skype for Business is going end of life in 2021.

But what is Teams and how can you make use of it?

Teams is a collaboration platform for “one to one – one to few – one to many” communication, keeping the focus on your team (virtual or organizational) rather than your complete organization, depending of course on size and such. Teams is not a new social intranet; that is where Yammer comes into play, in Microsoft terms.

Teams is heavily centered on conversations and collaboration in different contexts. Conversations can either be private, in chats, or more public, in a team where everyone in the team can participate (private channels are coming as well, as presented at Ignite, during Q1 of 2020).

Collaboration can also take different shapes and forms in Teams. But to set the expectations right: Teams is built on SharePoint Online and shares the same access principles and collaboration feature set as SharePoint Online.

Teams shouldn’t be looked upon as “yet another place” to check for news and updates; it should be considered the hub where you keep track of things. The more conversation you move to Teams, especially from email, the easier the transition will be. It is also your one-stop shop for calls, meetings and chats, which means it should be a part of your daily workflow!

And yes, Teams is so much more than what I just wrote. But it’s an easy place to start and an effective way in to using the platform!

So why should you care?

Even if we all love sending email, it’s not an efficient way of communicating. We all know that feeling after a few days off when you have 200 new emails, where most of them are “for your information” or just irrelevant. There is also a significant risk that you miss something important, and you will need at least a day to go through it all.

Teams can help you gain more transparency and faster collaboration. You also get the benefit of traceability for all the discussions you have had, either in personal chats or in larger forums, and it’s SEARCHABLE.

Looking at the trend and the buzz around Teams, it’s here to stay and is a more modern way to communicate. Email will still have its place in the world, but not as we use it today. There is also a whole new generation out there who don’t really understand why one would use email to communicate, since it’s not efficient.

Let’s break the old habit in 2020 and send less email and more instant messages! It doesn’t have to be Teams, since this is more a behaviour than a product. I promise you, both you and your users will find it more pleasing to get fewer emails!

Categories
Modern Workplace

Desktop Analytics – the new black

(Originally published on LinkedIn)

On the 16th of October, Microsoft released a new tool called Desktop Analytics, and we were quoted in the announcement, which to me is insane but also proves that we are doing the right things right now.

We have committed to following the Windows 10 feature upgrade schedule of two updates per year, which puts high demands on our applications and devices to be ready. That is where Desktop Analytics comes into play. The tool provides us with insights into all applications present on our computers, and we can identify many known issues before they happen.

By adopting this workflow, we can create more dynamic pilot groups to make sure that we cover as many scenarios as possible before deploying the update to all end-users. This will also help us build a bigger trust in the organization around the Windows 10 feature updates.

Having bigger upgrades of Windows twice per year is a tremendous change from how things were done in the past, when larger upgrades were released every 3-5 years. This brings a lot of new challenges in a large and complex environment like ours, with many older applications that were not designed for Windows 10. However, most applications do work, and this puts a larger responsibility on the application owners to keep their applications up to date and move quickly if there is a problem.

We still have things to do here, but we are getting there, and new tools with access to better data will help us make better decisions going forward.

If you haven’t yet read the blogpost from Brad Anderson, you can find it here: https://www.microsoft.com/en-us/microsoft-365/blog/2019/10/16/announcing-general-availability-desktop-analytics/

Categories
Digital Transformation Modern Workplace

Increasing device flexibility

(Originally posted on LinkedIn)

Let’s dig into hardware, since this is an important part of the workplace services.

In the old world, central IT basically dictated which computer to buy (you had a handful to choose from), and the ones available probably didn’t really fit your needs, but they were the closest you could get.

Okay, not THAT extreme, but I hope you get the point.

Limiting the selection of computers (with a set specification for each) is great in some sense:

  • Standardized range of models
  • No “surprises” for the support team
  • Easy for end-user to pick a device
  • Life cycle management becomes easier
  • Centrally decided which models and specifications to use = no discussion

But there is also a flaw in this setup: there is no room for flexibility and user needs. You get stuck with something close to what you need, but not quite.

Let’s start with an example

You have this range of computers to choose from:

  • Computer A – Small lightweight laptop, great for travel but not powerful
  • Computer B – Standard laptop, fairly mobile, fairly powerful.
  • Computer C – Powerful and large workstation, lots of power, lots of memory.
  • Computer D – Executive top model. Pretty powerful and slim design. Expensive.

For a user who travels a lot and needs a powerful computer, are any of these a good fit?

Taking a new approach

As part of the transition from one hardware vendor to another, we decided to change this approach and offer a broader range, even with models that overlapped. All of them could be specified to the user’s needs. In this context, “range” means certified for our custom image.

This also meant a more complex setup, potentially offering about 15 computers to our end users. This is where local IT plays an important part: creating the custom range for THEIR site. For us, local IT are the ones providing users with hardware, which should be fit for purpose for the end user’s needs.

Just because we centrally offer 15 models doesn’t mean that all 15 should be offered to end users at every site. Most sites actually ended up offering just a few models, BUT they could get that special machine that just a few users per site need, with the possibility to upgrade the processor, RAM and hard drive size without making it a non-standard device.

New challenges for central IT

Having this broad offering created new challenges for us as central IT. How do we explain to local IT when to pick which computer, especially when models overlap? This is something we hadn’t dealt with before in the same way, and it also positioned us in a different place.

We are becoming an enabler rather than a provider.

Positioning ourselves as enablers doesn’t just apply to hardware; the same could be said about many of our new services. But this is where we need to go, since we operate on business demands and not on what we think is interesting. We enable the business to succeed, and to do that we need to understand and meet their demands. Once again, understanding each local business need is very hard as a central organization, and we need the local IT staff to help users navigate the jungle we are creating by adopting a more flexible environment where we no longer dictate which devices can be used.

The conclusion

So how do we tackle this? We have only found one effective way and that is information. Information about the services and information about the hardware so that a good decision can be made as close to the end-user as possible.

However, we are not making things easier for ourselves right now. We are about to enable Windows and Mac managed from Intune. How should we position those, and why should one be picked over the other, or over the traditional custom Windows PC? We are working hard on creating good service descriptions to assist in making this decision together with the end user. Defining what you can do, but also what you cannot do, with each service becomes increasingly important for making this decision.

Since the modern workplace puts more focus on the user, the approach to which device the end user consumes the services on must change. We cannot be a “Windows only” environment anymore. Different people have different needs, and if we want to remain an attractive employer, which device you can use is not something IT can afford to dictate. You need to meet end users on their own ground and provide tools they are comfortable and used to working with, since they will bring their own work style.

Today we are doing this shift with our devices. Who knows, tomorrow it might be the applications.

Categories
Modern Workplace

Moving to modern management

(Originally published on LinkedIn)

I guess by now, most people are back from summer holidays (at least in Sweden) and I always feel that the much-needed summer break acts as a reboot both for motivation and ideas.

This fall will contain a lot of exciting things happening at once. The one I’m most excited about is introducing Windows Autopilot and an Intune-managed PC. This is a TREMENDOUS change for us, and it is probably only “part 1”.

Traditionally, for the last 20 years or so, we have managed computers in the same way: using on-premises server infrastructure and creating our “own” Windows version. This has gone through several generations; we are currently on “generation 4”, which is based on Windows 10. We manage these custom images using Config Manager and a bunch of group policies.

That’s how we have “always” done it and we are comfortable doing so.

But what happens when things are moving to the cloud and we change our work habits?

We don’t have the same work style today as we did back when Windows XP was released, or even Windows 8.1. The world has changed, and it keeps on changing. We are moving to consuming things as a service, and our “office” might not be on the corporate network all the time. Does it make sense to use a client heavily dependent on (and designed for) on-premises infrastructure?

After a lot of preparation, this fall we will start testing how we can utilize Intune to manage PCs, enrolling them through Windows Autopilot.

This is truly exciting and a big shift for us, moving from very old-school and wanting to manage everything to more of a light-touch approach where we manage what’s needed to keep the device and information secure.

“Does this setting add any value?”

Coming from an old-school setup, we have A LOT of policies and preferences configured. Some make sense, some are old leftovers that never got removed, and some are obsolete. We have even found XP-era settings that are still there but never get applied. So how do we decide what to keep?

We inventoried all the settings a typical PC has in our environment and roughly identified which GPOs correlate to MDM policies. But not all of these settings make sense in a new world where we want light touch.

Our working thesis has been: “Does this setting add any value?”. By asking ourselves that question, we try to avoid configuring things just because a setting exists. This has left us with a more relevant configuration. We removed a lot, but also kept a whole lot of settings, so not all our “legacy” settings were irrelevant.
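As a rough illustration of that working thesis, here is a minimal Python sketch of the triage: keep a legacy setting only if it both adds value and has a modern MDM equivalent. The setting names, flags, and CSP references are hypothetical examples for the sake of the sketch, not our actual inventory.

```python
# Hypothetical settings inventory: each legacy GPO setting with a rough
# value judgment and, where known, a modern MDM (CSP) equivalent.
legacy_settings = [
    {"name": "Enforce BitLocker",      "adds_value": True,  "mdm_equivalent": "BitLocker CSP"},
    {"name": "XP screensaver timeout", "adds_value": False, "mdm_equivalent": None},
    {"name": "Map F: drive",           "adds_value": False, "mdm_equivalent": None},
    {"name": "Require device PIN",     "adds_value": True,  "mdm_equivalent": "DeviceLock CSP"},
]

def triage(settings):
    """Split legacy settings into keep/drop by asking one question per
    setting: does it add value, and can it be expressed as MDM policy?"""
    keep = [s for s in settings if s["adds_value"] and s["mdm_equivalent"]]
    drop = [s for s in settings if s not in keep]
    return keep, drop

keep, drop = triage(legacy_settings)
print([s["name"] for s in keep])  # ['Enforce BitLocker', 'Require device PIN']
print(len(drop))                  # 2
```

The point is less the code than the habit: every setting has to justify itself, instead of surviving by default.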

Innovating for all users – lead by the few

In our very first “version” of a modern managed Windows computer, we are leaving ALL on-prem things behind. No co-management, no hybrid-join, no file shares. It’s a clean cut.

However, a lot of the things many users need still reside on-prem, making this new platform not fit for all scenarios at this point. But that was not what we were going for. This will be a cutting-edge platform targeted at those users who can, and are willing to, break free from the old environment and who mostly use cloud-based applications.

However, our objective is to use the learnings from this modern platform to improve our standard platform, helping drive innovation for all our users!

Cutting lead time

One massive thing this will also mean for our end users is shorter lead times when setting up a new computer, even if we utilize White Glove so that local IT can put their touch on the computer and provide that little extra service that only they can.

Today, imaging takes from 1.5 hours up to 3 hours for our image (taking into consideration that not all sites have superb internet connections). If we can cut this down, our users could potentially receive their computers much faster, even with a hands-on step by a local IT technician for end users who are not comfortable doing the enrollment themselves. Our infrastructure might not yet be mature enough for full coverage, but we can start at the bigger sites without any issues.

Where are we right now?

Right now, we are in an early pilot phase where we are identifying the last things before we can let some real users try this (we are basically 4-5 people running a cloud-managed PC). It’s still limited to a “cloud only” environment without any connection to Config Manager or other on-prem systems, so it will not be for everyone at this stage. But it will help us find the road forward to our next-generation workplace.

Categories
Digital Transformation Modern Workplace

Staying current in the new world

(Originally published on LinkedIn)

In this post, I’ll keep covering our digital transformation. If you haven’t read the previous parts, you can find the first part here and the second here. This is the story of how we left a legacy workplace in 2018 and started to build for the future.

One thing I’ve noticed that you often come across when you are working on bigger changes, and especially moving to new technology, is variations of the phrase “yeah, we don’t do it like that here, it would never work”.

If you have never tried it and you don’t really know what it is/means, how can you be so sure that it will not work?

I quite often play the “hey, I’m a millennial” card when discussing change (it works surprisingly well), especially when I talk about things that might be a bit naive and oversimplified. But it’s an effective way to push forward and skip over some of those road bumps you tend to get stuck on.

We now live in a world which is ever changing when it comes to the workplace. You can update the Office suite every month and Windows feature updates are released every six months. This is quite different from the past.

So how did we decide to navigate this?

The first step we took was to accept that this is what the world looks like now. No matter how much we complain by the coffee machine, this is the reality now.

The second step is to sell this to the organization, especially key stakeholders such as application owners and senior management. This is the tricky part since this is not so much technology as politics.

Instead of seeing each upgrade as a project itself, we built a process to support this flow of an evergreen world. This means that once we have finished the last step in the process, it’s time to start over again. Our process contains the following steps (imagine this as a circle):

  1. Inform stakeholders that a new release is coming in 2-3 weeks.
  2. Release the update to the first evaluation group (ring 0) to clear any compatibility issues in the environment.
  3. Release the update to the second evaluation group (ring 1), which contains application testers for business-critical applications, to give them as much time as possible to evaluate.
  4. Release the update to the third evaluation group (ring 2), which contains application testers for important business applications that are not deemed critical but still want to evaluate at an early stage.
  5. Release the update to the first pilot group for broad deployment (ring 3) to make sure that deployment works on a global scale. This step is estimated to happen 2-3 months after the Windows 10 feature upgrade is released, but it also depends on the outcome of the previous steps.
  6. Release the update to broad production (ring 4).

During this entire process, we monitor the deployments and keep track that nothing breaks. If an application is identified as problematic, the computers can simply be rolled back to the previous version of Windows 10, and that application will be put on an exclusion list (basically put in ring 5) until the application owner has taken action. This has, however, not happened yet.
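The ring flow above can be sketched as a tiny state machine. This is a simplified illustration of the logic, not our actual tooling; the ring names follow the numbered list, and the "hold" outcome corresponds to the rollback-and-exclude path.

```python
# Simplified model of the ring-based rollout described above.
RINGS = [
    ("ring 0", "first evaluation group"),
    ("ring 1", "business-critical application testers"),
    ("ring 2", "important (non-critical) application testers"),
    ("ring 3", "first broad pilot"),
    ("ring 4", "broad production"),
]

def next_target(current_ring, blocking_issues):
    """Advance to the next ring only if the current one is clean;
    otherwise hold, roll back affected machines, and exclude the
    problematic application (our 'ring 5') until its owner acts."""
    if blocking_issues:
        return None  # hold the rollout at the current ring
    names = [name for name, _ in RINGS]
    idx = names.index(current_ring)
    return names[idx + 1] if idx + 1 < len(names) else None

print(next_target("ring 1", blocking_issues=[]))       # ring 2
print(next_target("ring 2", blocking_issues=["VPN"]))  # None (hold)
```

Once the loop reaches ring 4, the process simply starts over with the next release, which is the whole point of treating Evergreen as a process rather than a project.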

Does this process work in the real world?

Yes. We ran through this, though at a slightly higher pace, when moving from Windows 10 1709/1803 to Windows 10 1809. To our knowledge, we did not have any major incidents where we broke an end user’s computer. We upgraded roughly 18 000 computers in a matter of a few weeks.

We did have errors, though, and a lot of them during the first week. But all errors indicated that users were not able to run the upgrade (it was blocked). This was expected based on the earlier tests we had run with the earlier rings, and nothing we couldn’t handle. Everyone was confident in the servicing, and all errors either resolved themselves or were fixed by our technicians, in bulk or case by case.

After our first major Windows as a Service experience, we trust the servicing even more than before.

BUT, having static rings as we do today is far from ideal. Until we have better tools (such as Microsoft Desktop Analytics) to create dynamic rings, this is our approach. We will spend some time fine-tuning the setup and move to dynamic rings once we have the tools.

The outcome

  • Users had the update available for 21 days; after that, the installation was mandatory
  • We upgraded roughly 18 000 computers in about a month
  • No major application compatibility issues
  • Branch Cache took about 50-60% of the workload
  • No reported network disturbances during this time caused by SCCM

Bonus learning

One thing we realized quite early on was that the phrase “application testing” scares people, especially management. The general feeling is that testing is expensive and time-consuming, and it causes unwanted friction when you want to speed up the pace. Therefore, we decided to rephrase it: we were not aiming to do “application testing” in rings 1 and 2, we were aiming to do “application verification”. This minor change in wording changed the dialogue a lot, and people became less scared of the flow we set up. Verification is less scary than testing.

Categories
Digital Transformation Modern Workplace

Deploying the future

(Originally published on LinkedIn)

This is the second part of a series about the digital transformation journey we are doing at Sandvik. You can find the first part here, Leaving legacy in 2018.

When I joined Sandvik back in 2017 we were right in the middle of upgrading our Configuration Manager environment from SCCM 2007 to SCCM Current Branch. This was a huge project in which we invested a lot of money and time into with our delivery partner.

We finally pulled through. Everyone involved in the project did a huge effort to get us there, from the SCCM delivery team/technicians to local IT. This was our first step towards the future for our clients and this meant we could start working on Windows 10.

Configuration Manager and deploying applications were, however, still somewhat of a struggle for us. Every other time we did a large deployment, we had to deploy in waves and spend a lot of time and effort on not “killing” the slower sites, which often meant deploying at odd hours and asking users to leave their machines on overnight at the office. More than once, we had to pull the plug on deployments because we were consuming all the network bandwidth at some sites, even the bigger ones. We did have a peer-to-peer solution, but it was not rolled out to all sites and machines.

We had to fix this.

Since we had moved to SCCM CB, a lot of new opportunities opened up (maybe not from day one, though), which meant that we actually had tools in our toolbox to solve this in a new way, such as Branch Cache and Peer Cache (which in themselves are not new features).

We decided to start with Branch Cache, since our biggest problem was application distribution. We piloted Branch Cache at a few sites to see if we could actually gain something from it, and the results were really promising, so we started deploying it throughout our whole environment, starting with the most prioritized sites without local distribution points and then moving on to all sites. When Branch Cache was widely deployed, we scaled down our 1E Nomad solution and eventually removed it.

We managed to do the following bigger things without causing network interference, with Branch Cache being well utilized:

  • Deploy Office 365 ProPlus update to > 25 000 computers
  • Deploy Windows 10 feature update to > 18 000 computers

And then there is the one we are most proud of to date: we deployed Teams to > 25 000 users, with a Branch Cache utilization of 70%. This is our best number so far for applications, and we are not yet even using phased deployments in Config Manager.
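As a back-of-the-envelope illustration of why that utilization number matters, here is a small Python sketch. The client count, package size, and hit rate below are illustrative assumptions for the arithmetic, not measured figures from our deployment.

```python
def wan_savings(clients, payload_mb, peer_fraction):
    """Rough estimate: with a peer-to-peer hit rate of `peer_fraction`,
    only the remaining fraction is pulled over the WAN."""
    total_mb = clients * payload_mb
    wan_mb = total_mb * (1 - peer_fraction)
    return total_mb, wan_mb

# Illustrative numbers: 25 000 clients, a 100 MB package, 70% peer hit rate.
total, wan = wan_savings(25_000, 100, 0.70)
print(f"{total / 1000:.0f} GB total, {wan / 1000:.0f} GB over the WAN")
# 2500 GB total, 750 GB over the WAN
```

Even at these made-up numbers, the difference between pulling 2.5 TB and 750 GB across WAN links is the difference between deploying at odd hours and deploying whenever you like.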

Our next step is to get Peer Cache out to a few sites, especially sites with bad connections to the closest distribution point. The reason we want Peer Cache in the environment is to ease PXE installation at our smaller/remote sites. In parallel, we are also investigating how we could utilize LEDBAT for the traffic between our SCCM servers. This, however, requires that our SCCM servers run at least Windows Server 2016, and we are not completely there yet. But there is still a lot of time left in 2019!

The take away from this

The biggest takeaway: Branch Cache works, and it works really well. If you have not yet started to investigate Branch Cache, I would advise you to do so. It has saved us a lot of headaches and time, since we can now deploy with great confidence that we will not disturb our critical business systems with traffic that might not be as critical. The fact that we have managed to reduce WAN traffic by up to 70% for larger deployments has improved other teams’ trust that we can deploy things in a disturbance-free way.

I also want to point out that our team of technicians and architects has done tremendous work making this possible.