I'd say this was invoked by the apps running on it. This VM is used for analytics and reporting; it's got Visual Studio, Power BI, and SQL Server running on it. The vendor must've been doing some shit on it.
So, because Veeam reapplied the old snapshot, everything between then and now was lost.
If I keep the old snapshot as current, all the systems have domain trust failures.
I have a backup, and there is only a single shared folder on here that will need to be restored.
Because this site is only 20 Windows computers, and I wanted to stop wasting time after 2 hours, I went brute force.
I disjoined everything from the domain and rejoined them.
With that resolved, I restored the C:\shares folder on the DC from the last Veeam backup.
Site is now working again.
Veeam's replication is now also in the correct state to fail the replica back to the main Hyper-V server.
Not fucking touching that tonight.. I R DUN.
Edit: Maybe not the perfectly correct answer, but the answer I went with.
@wrx7m Is that a computer configuration or user configuration policy? Try applying the rules to only non-admins groups.
Yeah, it is at the computer level. I would like to do it via user config, but I only want the policies to apply to users on the RD servers. I need to figure out the proper way to structure AD/GPOs so I don't screw up everything else.
I am guessing I need to create another OU as a sub-container and move the RD servers into it.
Edit: Since it isn't GPP, there isn't any item level targeting, so I can't do it that way.
If you can make those changes directly in the registry, that might allow you to use GPP and item-level targeting.
If you have two servers and run HA, does that mean that you have to license Windows Server standard for the maximum number of VMs running when you have a failure?
So for example,
Server A: 16 cores, runs 6 VMs normally
Server B: 16 cores, runs 6 VMs normally
So each server has to be licensed for all 12 VMs running on 16 cores - so 6 x Windows Server Standard licenses for each server, total of 12 licenses?
But if you didn't run HA, you would only license each server for 6 VMs, with 3 x Windows Server Standard, a total of 6 licenses?
Is this correct?
If you're running an HA setup of Server Standard, all physical servers must be licensed for all of the Windows Server VMs that can run on them. This means each physical server in your HA cluster must be licensed for 12 Windows Server VMs.
So yes, you are correct in that to license 12 Windows Server VMs on both of your physical servers, you'll need 6x Windows Server Standard licenses for each server, 12 "licenses" total as you said.
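The arithmetic above can be sanity-checked in a few lines. This is a sketch, assuming the standard rules for Windows Server Standard: one license covers up to 16 cores on a host and grants rights to 2 Windows Server VMs (OSEs), and licenses can be stacked on the same host to add VM rights in pairs.

```python
import math

def standard_licenses_needed(cores: int, vms: int) -> int:
    """Licenses required on ONE host: enough to cover its cores
    (16 cores per license) AND enough to cover the VMs that can
    run on it (2 VMs per license, stackable)."""
    core_packs = max(math.ceil(cores / 16), 1)  # licenses to cover the cores
    vm_packs = math.ceil(vms / 2)               # licenses to cover the VMs
    return max(core_packs, vm_packs)

# HA cluster: each 16-core host must be licensed for all 12 VMs
per_host_ha = standard_licenses_needed(cores=16, vms=12)
total_ha = per_host_ha * 2

# No HA: each host is only licensed for its own 6 VMs
per_host_no_ha = standard_licenses_needed(cores=16, vms=6)
total_no_ha = per_host_no_ha * 2

print(per_host_ha, total_ha)        # 6 12
print(per_host_no_ha, total_no_ha)  # 3 6
```

Which matches the thread: 6 licenses per host (12 total) with HA, 3 per host (6 total) without.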
@scottalanmiller Yeah, RDW is the problem, I understand now. Basically you can use Remote Desktop without the App Resources proxy without any problems, but if you try to load the App Resources from a Windows 10 device, then you hit the issue. I understand now.
Yeah. It's a weird "Essentials only" issue, only on 2012 and older, from what I can tell.
Does NC allow exposure of their "file shares" as SMB? If you have users who can't or don't want to use browser-based access, they can always mount it in Windows Explorer via WebDAV. Alfresco allows (allowed?) access via both, but the last time I played with it the performance was meh, which I attributed to it being built on Java...
You can mount NextCloud into a drive letter or folder using WebDAV.
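For users who won't use the browser, the mount can be a one-liner from Windows. A sketch, assuming a stock NextCloud install: the hostname and username below are placeholders, and `/remote.php/dav/files/<username>` is NextCloud's standard WebDAV endpoint.

```
:: Map NextCloud's WebDAV endpoint to a drive letter (Windows)
net use N: https://cloud.example.com/remote.php/dav/files/alice /user:alice
```

Windows will prompt for the password (or an app password if 2FA is enabled), and the share then appears in Explorer like any mapped drive.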
The question does become the aforementioned performance issue (if there is one).
I wonder how file locks are handled when using WebDAV?
There are a few topics elsewhere here where file locking and cloud hosting were discussed. You do have to give up what we have all come to appreciate in file locking. Here is a response from one of those other topics I mentioned:
I am aware of that. It's online locking that I am after. Though, I will concede that any locking scheme has to plan for both online and offline. I like sync because of local performance and offline availability, but it really feels like it is best for non shared files. When you add multiple users into the mix, almost everything goes out the window, especially when and if they go offline.
Everything is best for non-shared files :)
SMB shines for "always online, always nearly local" files, but it handles offline poorly. It's a balance: to handle offline or very distant (e.g. high-latency) networks well, you have to sacrifice locking.
Yeah good stuff. I have a couple Udemy courses on it.
Something I didn't see in this article was that Cloud Shell has Terraform built in... so you really don't even need to install it. I try to keep things serverless and source controlled, so I wouldn't want to install Terraform.
As long as it uses the latest version of Terraform. There are many differences between the current and previous release.
Cool. Yeah, 0.11 vs 0.12 are much different. A lot of things have to be redone.
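For anyone wondering what "has to be redone" looks like, the biggest mechanical change in 0.12 was interpolation syntax. A minimal illustration (the resource and variable names here are made up, not from any real config):

```hcl
# Terraform 0.11 style: every reference is a quoted interpolation string
resource "azurerm_resource_group" "rg" {
  name     = "${var.prefix}-rg"
  location = "${var.location}"
}

# Terraform 0.12 style: first-class expressions, quotes only for real strings
resource "azurerm_resource_group" "rg" {
  name     = "${var.prefix}-rg"   # interpolation still works inside strings
  location = var.location         # bare references no longer need "${...}"
}
```

0.12 also changed how lists, maps, and conditionals behave, which is why many modules needed a rewrite rather than a find-and-replace.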