I work for a company with many hundreds of terabytes of files on our file servers (if I counted our SQL, workstation and Avid environments I would weep, but ignorance is not bliss, and it was our Mac and Avid environment that pushed me towards this solution), so every once in a while we need to archive off recent jobs (a successful campaign, for example).
Because of this I wrote a quick and dirty PowerShell script (a Linux shell script I wrote for a previous employer also exists, but I am appeasing the masses with this one). The script looks for a folder naming convention (in this case a name beginning with ARC_) and moves that data to the archive location. The below runs on a local file server (I have it deployed against a file server role in a Windows cluster environment, and rewriting it to use UNC paths instead of local drives is an easy task).
######
#
# A little script to archive all folders with "ARC_" at the beginning of their name to a new location
#
# USAGE:
#
# Specify the directory to search in using sourceDir (source folder) and the targetDir (target directory) you want to copy to. Make sure they are in quotes, especially if they have spaces
# For example:
# 'C:\TEMP\Mikes Porn' - is GOOD!
# C:\TEMP\Mikes Porn - is BAD!
#
# By Mike Donaldson (tekctrl@gmail.com)
#
######
#Specify the source and target folders
$Client = Read-Host "Which client do you want to archive? (The name you enter has to match the client folder name)"
$sourceDir = '<production share letter>:\Shares\<share name>\' + $Client
$targetDir = '<archive share letter>:\Shares\<share name>\' + $Client
#Find all folders that have ARC_ at the beginning of their name.
Get-ChildItem $sourceDir -Filter "ARC_*" -Recurse -Attributes Directory |
#For each folder found, create the matching folder in the archive and move the content over. The ARC_ prefix is stripped from the target name, and robocopy's /MOVE removes the source as it goes.
ForEach-Object {
    $targetFile = $targetDir + $_.FullName.Substring($sourceDir.Length)
    $targetFile = $targetFile -replace "ARC_",""
    $sourceFile = $_.FullName
    Write-Output $sourceFile
    New-Item -ItemType Directory -Path $targetFile -Force
    robocopy.exe $sourceFile $targetFile /COPY:DAT /E /MOVE /R:1 /W:1
    $reportLocation += "$sourceFile has been moved to $targetFile`r`n"
}
#Send the email out informing IT what has been archived
$body = "The following folders have been archived:`r`n" + $reportLocation
Send-MailMessage -From "it.systems@whopaysthebills.com" -To "whogetspaid@whopaysthebills.com" -Subject "Clients Archive" -Body $body -SmtpServer "willyouletme.thiswontwork"
echo "Archiving has completed! Now bugger off to the pub and have a well earnt Appletini"
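The Linux shell script mentioned at the top isn't published here, so as a purely illustrative sketch, the same prefix-stripping move can be done in bash. The function name, the example paths and the mv-based move are my assumptions, not the original script:

```shell
# Hypothetical bash take on the same idea: find ARC_* folders, recreate the
# path under the archive share without the ARC_ prefix, and move them across.
archive_arc() {
  src=$1 dst=$2
  find "$src" -type d -name 'ARC_*' -prune -print | while IFS= read -r d; do
    rel=${d#"$src"/}                # path relative to the production share
    target="$dst/${rel//ARC_/}"     # drop the ARC_ prefix, as the robocopy step does
    mkdir -p "$(dirname "$target")"
    mv -- "$d" "$target"            # like robocopy /E /MOVE
  done
}
# e.g. archive_arc /mnt/production/client /mnt/archive/client
```

Note that mv will nest the source inside the target if the renamed folder already exists in the archive, so on a rerun you would want an existence check first.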
Now obviously you can drink whatever you want (this script was written many years ago, when Scrubs was my main TV viewing), but once this data is moved to another drive, it still exists! For a few businesses I have assisted, this simply moved the data from A to B, but it still needed storing, backing up and hosting, none of which removes the headache. So an extra step is required.
Once this data is backed up offsite (and offline, to air-gap it from any ransomware), we can remove the content and leave the file names behind, making it easy for anyone to ask for a file back (they still have the file path and original file name, after all, since they can see it in the archive location via SMB or AFP etc.). For that we run a separate PowerShell script that clears the content (I have always found it's best never to run these two scripts as one, given the damage they can cause), saving you the space of hosting many terabytes over many years of a successful career in IT (competition breeds innovation, after all!). You will obviously need to rewrite some aspects, such as the target location (Z:\ may not be your favourite choice of archive drive letter), and you may have a flatter file structure and therefore not require a client variable.
######
#
# A little script to wipe the content of every file in the folder you specify...be careful!
#
# USAGE:
#
# Specify the client name as listed on the file server. If it does not match, then you could wipe many
# terabytes of data you have yet to keep offline, elsewhere. If that happens then it's your fault for
# randomly running a script you found on the internet without understanding what it is capable of.
# I will always put comments in my scripts to help everyone understand my random typing. It is the
# true sign of a mad man after all and if you don't understand it and still run it..cool....FFS!
#
# By Mike Donaldson (tekctrl@gmail.com)
#
######
#Specify the source folder
$Client = Read-Host "Which client folder are you clearing up?"
$JobNumber = Read-Host "Which Job folder are you clearing up?"
$sourceDir = ('Z:\' + $Client + "\" + $JobNumber )
$sourceSearch = ($sourceDir + '*')
$illegalCharacters = "[\[\]+']"
Write-Output $sourceSearch
#Find all files that have been marked for being dumb
Get-ChildItem $sourceSearch -Recurse -Force -File |
#For each file that's recursively under sourceSearch, we wipe the content, leaving just the file name. 0 KB per file makes it much easier to look after.
ForEach-Object {
    $path = $_.FullName
    if ( $_.Name -match $illegalCharacters )
    {
        #Square brackets act as wildcards in -Path parameters, so swap the offending characters out first
        $newName = $_.Name -replace $illegalCharacters,'-'
        Rename-Item -LiteralPath $path -NewName $newName
        $path = Join-Path -Path (Split-Path -Path $path) -ChildPath $newName
    }
    Clear-Content -LiteralPath $path
}
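For the Mac and Linux file servers out there, the same zero-out step can be sketched in shell. This is a hypothetical equivalent, assuming GNU coreutils' truncate is available, and as above, only run it against data you already hold safely offline:

```shell
# Truncate every file under the given directory to 0 bytes,
# keeping names and paths intact so users can still request files back.
wipe_contents() {
  find "$1" -type f -exec truncate -s 0 {} +
}
# e.g. wipe_contents /mnt/archive/client/job123
```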
tekctrl
Trying to make everyones digital life a little easier
Wednesday, 24 April 2019
Tuesday, 19 January 2016
Windows Local/Group policy edit gone wrong and locked out
Admittedly this was a first for me and I'm not 100% sure what caused it, but I applied my secure baseline .inf to a new Windows Server 2012 R2 server, and after around a day the server admin couldn't log in to the server via RDP. He was getting the error "The system administrator has restricted the type of logon (network or interactive) that you may use. For assistance, contact your system administrator or technical support." (which he indeed did!).
Now the secure baseline makes a lot of changes to a system, including renaming the admin account, specifying the protocol and encryption level for RDP, and stating who can do what on the system, so I undid the changes I had made regarding RDP just to make sure it was all OK, but alas, the same error. I even switched off all GPOs, imported the default baseline into Local Security Policy and rebooted, but the damn thing still told me to piss off! It seemed like one of the settings was hanging around regardless, and I didn't have the time to delve into the registry and find it, so I ran one simple command which resets it all back to the way Microsoft originally intended.
secedit /configure /cfg %windir%\inf\defltbase.inf /db defltbase.sdb /verbose
Once I ran this everything was back to normal and there was no need to reboot the machine at all.
Now all I have to do is revisit my baseline and update it and confirm all is well again...Might be needing a drink for this one!
Location:
London, UK
Tuesday, 27 October 2015
Upgrading VirtualCenter 2.5 on Server 2008
I know it's all old tech (Windows Server 2008 and VirtualCenter 2.5), but we have an old ESX 3.5 server that a dev of ours uses, and it was licensed by a .lic file hosted on a 2003 server.
Now, as we all know, 2003 was dropped a few months back, so when I found this little f*cker I decided to upgrade it to 2008 (it's 32-bit, so that's as far as I can go with it). Once the server was upgraded and 2008 was working was when the troubles really started... The key thing here would have been to upgrade VMware BEFORE I updated to 2008, but (my mistake entirely) I assumed someone would have updated it to the latest version two years ago, when it was last in use and VMware released the updates, but there you go...
First off, the license server wouldn't start as it said it couldn't read the license files. This is a known issue with the VMware license server and WS 2008. To get the license server working you need to download the VirtualCenter 2.5 Update 6c zip and run the bin\VMware-licenseserver.exe file in compatibility mode. If you don't, you will get an authorization error saying you don't have permission and you'll be stuck on an older version. Easy enough...
The slightly more difficult job was upgrading VirtualCenter 2.5 to U6 and then to U6c. If you run the msi normally (VMware VirtualCenter Server.msi) it will fail, saying it's already installed, and won't offer to upgrade you. If you run VMware-vcserver.exe it will pop up an error saying you need to be running Windows 2000, XP or Windows 2003 to install this product. Since those days are long gone, as you're on 2008+, you need to edit the VMware VirtualCenter Server.msi file (the .exe references and runs it) and tell it to ignore the upper limit of the OS version. Using Microsoft's Orca editor, open the .msi and find the section that mentions LaunchCondition. Edit the line "VersionNT>=500 And VersionNT<600" and remove the "And VersionNT<600" so that it no longer enforces the upper OS limit. Now just allow compatibility mode for the .exes you want to run and they will actually run and allow you to upgrade!
Occasionally the upgrade will tell you that files are in use and list the PID number of the application (sometimes just the PID number) so have cmd prompt running as admin handy and run taskkill /F /PID <PID number> to kill them all off and let you continue.
I had an issue after it all ran, as my SQL DB (VIM_VCDB) was full, so the DB upgrade script failed. I cleared the SQL logs and then re-ran the database upgrade (to rerun, go to C:\Program Files\VMware\Infrastructure\VirtualCenter Server\dbupgrade and run VCDatabaseUpgrade.exe).
I thought I was in the clear but the VMware Infrastructure Server service still wouldn't start. When I tried to run vpxd.exe via cmd prompt I saw the following error:
[VpxdVdb] Database version value 'VirtualCenter Database 2.5u2' is incompatible with this release of VirtualCenter.
Now even though the DB upgrade said it was successful, it did not mark the DB as upgraded, so my DB still thought it was running 2.5 Update 2 rather than the upgraded 2.5 Update 6. Odd. I thought I could just edit the SQL table that contains this info, but decided it was best to go down the proper route. I ran the dbupgrade again, but via cmd prompt and with an extra switch:
VCDatabaseUpgrade.exe DSN="VMware VirtualCenter" MINORDBUPGRADE=1
My ODBC System DSN was "VMware VirtualCenter" but yours may be different so check first. Once that ran I restarted SQL and the VMware services and bingo I was in and everything was working fine. I had to run both updates as you can only upgrade to Update 6c from Update 6.
Tuesday, 13 October 2015
Getting LDAPS working in CentOS 7.1
We have a new Drupal server and wanted to connect it up to LDAP. It sits in a DC where we have no AD servers, and of course it needs LDAP to make it easier for users to log in without having to remember 1001 passwords, which is fair enough. Now for this we obviously need to run it over LDAPS because, I don't know about anyone else, but I don't want the f*ckers of this world getting my AD password when it's passed in clear text over the internet during authentication.
Now when I tried to connect to the RODC via LDAPS (port 636) using ldapsearch, it just failed with:
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
Peer's certificate issuer is not recognized...
We already had the CA cert and the RODC cert in /etc/ssl/certs, but obviously this isn't enough, as Linux needs to actually fully trust our CA. The easy way to add a CA to CentOS is to run the following:
cp ca.cer /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
No need to worry about converting a certificate from a Microsoft server to a .pem etc. - I used it against our CA server certificate, which is a .cer, and it worked a treat. I ran an LDAP test in Drupal and got the result I wanted straight away, so no need to restart or anything... however, if you wanted to restart the server and pop to the pub while it "starts back up again" then I wouldn't blame you ;)
Friday, 6 March 2015
Search a whole network for pingable devices
Sometimes it can be handy to check what devices are live on a network. With Windows you can just ping your broadcast address and that will work (albeit in whatever order the devices respond), but what if you want to quickly scan another network? You can't ping the broadcast address of a network your PC is not actually on. Don't worry, however, as it's easily done with a quick command.
Simply run this command and you'll find a text file with all the answers (this file will only contain the IP addresses which have responded). Just change the details to suit your network.
FOR /L %i IN (1,1,254) DO ping -n 1 192.168.1.%i | FIND /i "Reply" >> c:\foundyou.txt
The brackets in the command contain the finer details. The first number is the IP to start with (in this case 1, so the first ping goes to 192.168.1.1); the second number is the increment used to reach the next IP. Here it is 1, so the next IP pinged is 192.168.1.2; had we put a 2 there, 1 + 2 would make the next IP 192.168.1.3. Finally, the last number is where to stop. In our case that is 254, so we ping every IP in the 192.168.1.x/24 subnet. If we only wanted to ping 192.168.1.1-25 then we'd change the last number to 25.
Also, change the file path to somewhere you can actually save files, or you'll get an access denied error... unless of course you run your command prompt as administrator. Now while this is running, it's a perfect time to get a quick pint in at the local!
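The same (start, step, end) counter that FOR /L uses can be sketched as a small shell function for the *nix folk. build_targets is a made-up helper name, just to show the semantics; each emitted address would then get its own ping (ping -c 1 on Linux, ping -n 1 on Windows):

```shell
# Mirror cmd's FOR /L %i IN (start,step,end): emit one address per counter value.
build_targets() {
  subnet=$1 first=$2 step=$3 last=$4
  seq "$first" "$step" "$last" | while read -r i; do
    echo "$subnet.$i"
  done
}
# build_targets 192.168.1 1 1 3 prints 192.168.1.1, 192.168.1.2 and 192.168.1.3,
# one per line; use 1 1 254 for a full /24 sweep.
```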
Wednesday, 5 March 2014
Finding empty ports on a Cisco switch
Now I've started working for a new agency that has Cisco switches we look after internally, and as with most media agencies, there always seem to be desk moves. With this in mind it is always important to find free ports on the switches before a move.
Now 95% of our users have laptops, so you cannot rely on a port not being active today, as the user could be working away from their desk and will plug their ethernet cable back in when they return.
A very easy way of finding free ports and when they were last used is to login to the cisco switch and run the following command:
Sh int | inc Gig|Last input
This will list all ports on the switch and show their status (connected or not connected) along with the last time data passed through them. As you can see below, port 23 is very much still in use, while port 24 is disconnected and was last used 11 weeks and 5 days ago, so I would say it is safe to re-use. Remember to always check a port's config before you re-assign it, as that config could be important and required in the future for some random legacy system.
GigabitEthernet2/0/23 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is 5835.d981.9a97 (bia 5835.d981.9a97)
Last input 00:00:04, output 00:00:09, output hang never
GigabitEthernet2/0/24 is down, line protocol is down (notconnect)
Hardware is Gigabit Ethernet, address is 5835.d981.9a98 (bia 5835.d981.9a98)
Last input 11w5d, output 11w5d, output hang never
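If you save that output to a text file (show_int.txt here is just a name I've made up), a quick awk one-liner, assuming the format shown above, pulls out only the down ports and how long they have been idle:

```shell
# Print "<port> <last-input age>" for every interface whose line protocol is down.
# The first pattern remembers the port name; the second fires on its Last input line.
awk '/line protocol is down/ { port = $1 }
     port && /Last input/    { sub(",", "", $3); print port, $3; port = "" }' show_int.txt
```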
Now that you've saved time by not having to crawl under desks to find unpatched ports, you can pop down the pub and have a few well-earned pints!
Wednesday, 22 January 2014
Syslogging a Cisco switch
Now when it comes to looking after a large IT estate, it can be a pain to keep track of all those errors and warnings in logs. Most people put alerting in place for their servers and firewalls which is great, however you can never forget those poor switches that carry all that data around your network. If one of those fails then it is back to using carrier pigeons for you!
To configure a cisco switch to send out syslogs to a server you need to first of all enter config mode
tekctrl-3570-01#conf t
Then you need to set the switch to timestamp the logs, otherwise they won't make any sense in your syslog server
tekctrl-3570-01(config)#service timestamps log datetime
You need to then inform the cisco switch where to send the logs to
tekctrl-3570-01(config)#logging x.x.x.x
You then need to configure the level of events the switch will send over. 7 is the most verbose (debugging) and will send EVERYTHING across. I have set this one to 4 (warnings) so that emergencies, alerts, critical errors and warnings are sent over, but notification-level information, such as someone plugging a cable into a port, is not (because come 17:30 your syslog server would fill with damn near every good user powering down and disconnecting from the switches!)
tekctrl-3570-01(config)#logging trap 4
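For reference, the trap numbers run from 0 (emergencies) through 7 (debugging), and anything at your chosen level or more severe gets sent. A tiny shell helper (sev is hypothetical, purely for illustration) makes the number-to-name mapping easy to check:

```shell
# Map a 'logging trap' level number (0-7) to its Cisco severity keyword.
sev() {
  n=$1
  set -- emergencies alerts critical errors warnings notifications informational debugging
  shift "$n"
  echo "$1"
}
sev 4   # -> warnings (everything at level 4 and more severe gets sent)
```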
The next line sets the syslog facility the switch logs under. local7 is the default, so we might as well stick with it
tekctrl-3570-01(config)#logging facility local7
Now exit the config mode
tekctrl-3570-01(config)#end
And for f*ck's sake save the running config, or you'll be kicking yourself (and I'll be kicking you too!) when the switch reloads and all this heavenly syslog glory is lost...
tekctrl-3570-01#copy running-config startup-config
Destination filename [startup-config]?
Building configuration...
[OK]
Now confirm the logs are appearing in your syslog server and f*ck off to the pub and let the syslog server deal with all the whining and moaning your Cisco kit is producing
Saturday, 1 June 2013
Showing when a page last updated
A handy little trick I often use to check the last time a web page was updated (handy when having to reference the site and check its activity):
javascript:alert(document.lastModified)
When using Chrome, however, you need to type this in manually, because when you copy and paste it in, Chrome removes the javascript: section and you'll just run a needless Google search for "alert(document.lastModified)"
It even works in Android (Chrome for Android does not remove the JavaScript: entry either)
Give it a try and crack open a newly brewed Budweiser to celebrate (and don't forget to check the born date on those bad boys just for consistency)
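If you'd rather check from a terminal, much the same date usually comes from the HTTP Last-Modified response header (which is what document.lastModified is typically populated from, when the server sends it). A small filter over curl -sI output does the job; last_modified is a made-up helper name:

```shell
# Pull the Last-Modified value out of HTTP response headers read from stdin.
last_modified() {
  grep -i '^last-modified:' | cut -d' ' -f2- | tr -d '\r'
}
# usage: curl -sI https://example.com/ | last_modified
```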
Tuesday, 26 March 2013
Restoring deleted items from Public Folders natively
This morning I had a user call up and say that half the meeting room calendars (which are public folders) were empty. I checked via Exchange Management Console and could confirm there were no items. However, under the statistics for the folder I noticed it was showing a total deleted items size.
I decided to check this out through ExFolders (the Exchange 2010 replacement for PFDAVAdmin) and did come across a problem, as I was getting the error "An error occurred while trying to establish a connection to the exchange server. Exception: the Active Directory user wasn't found". To get past this issue, open ADSIEDIT and select Configuration from the Well Known Naming Context drop-down menu. Then drill down to Configuration > Services > Microsoft Exchange > Domain Name > Administrative Groups > First Administrative Group and delete the Servers object. This can sometimes be left behind from old Exchange 2003 installs. As soon as that is gone, ExFolders can continue (please read the ExFolders "read me", as it specifies running the reg edit file and moving ExFolders.exe to your Exchange location's \bin\ folder, generally <drive>:\Program Files\Microsoft\Exchange Server\V14\Bin. It will crash otherwise).
After finding one of the folders that had its content deleted, I noticed that at the bottom there are "normal contents" and "deleted contents" radio buttons. Unsurprisingly, selecting "deleted contents" brings up the list of deleted items. To restore them it is a simple task of selecting the items, right-clicking and selecting "restore items". Bingo, the items are back and I didn't even need to get out of my chair to get to the backup tapes. Which is handy, as they are in the opposite direction to the pub...
Location:
Canary Wharf, London E14, UK
Thursday, 21 March 2013
Cannot login to BES Express 5
I recently installed a new BES Express 5 server at a client site, as their old BES 4.1 environment was becoming a bit unreliable. Though everything installed perfectly and no error messages appeared, whenever I tried to log in as either besadmin or the domain administrator, it said "The username, password, or domain is not correct. Please correct the entry."
I was 100% sure all the details were correct, and oddly, after a few attempts just to make sure, the besadmin account became locked. This at least meant the details were being passed through to AD, so I knew it wasn't broken in that aspect. I had looked through a few forums and noted that some people suggested adding Kerberos authentication to the besadmin account and that this would do the trick. I have installed a few BES environments and never had to do this, so I decided to skip that step. I did notice that both passwords contained a £ or #, which are unsupported characters. I changed the besadmin password to not contain the £ and changed the password for each of the services that used the besadmin account on the new BES server. I also entered the new password into Blackberry Server Configuration > Administrator Service - AD Settings tab and then restarted the server.
Once this rebooted and all the services started up OK, I could happily login and begin to move users across using the Blackberry Transporter Tool.
For more information on unsupported characters in Blackberry service passwords and how to troubleshoot authentication issues please see Blackberry KB19200 and then go to the local for a nice ale and save yourself 1p ;)
Monday, 18 March 2013
M Audio Delta card not working in OpenSUSE 12.3
Now I keep flipping between various Linux distributions. Recently I decided to run OpenSUSE 12.3 on my desktop and Ubuntu on my laptop. I have an old M Audio Delta 66 sound card, which I have ALWAYS had issues with (not only in Linux but in Windows too, from Vista upwards), and OpenSUSE 12.3 was no different. There was no audio at all and I could not change any settings in System Settings > Sound (if I tried to load it up, it would crash). I ran lspci and could see the card was listed:
mike@Mike-Suse-PC:~> /sbin/lspci | grep -i audio
02:07.0 Multimedia audio controller: VIA Technologies Inc. ICE1712 [Envy24] PCI Multi-Channel I/O Controller (rev 02)
05:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI RV770 HDMI Audio [Radeon HD 4850/4870]
Now I could see the sound card listed (it's the ICE1712 device), so I knew it was detected at least. I've never been able to get this card to work with PulseAudio running, so I removed that and installed the ALSA tools. To install them, run the following:
sudo zypper install alsa-tools
After this has been installed I was able to edit the various inputs I use through the card (I use the Omni Studio expanded unit as well). How you set yours up will depend on your hardware; I use the monitor outputs for my M Audio BX-8a speakers (my setup in YaST is shown below, and as you can see I just use the DAC outputs, which are the monitor outputs on the sound card).
I also removed PulseAudio since I was no longer using it, but you can leave it installed if you really want (to uninstall it, just run sudo zypper remove pulseaudio). Now for MP3 support you will need to install the codecs separately, as free and open-source distributions are not allowed to package them together. There are two ways to do this:
The Easy way
Simply click on this link and it will install all the relevant codecs you need for multimedia playback (MP3 and DVD etc)
The Terminal way
To install all the codecs you need, first run the following commands in the terminal:
Add the needed repositories (skip the dvd repo if you don't need DVD playback)
zypper addrepo -f http://ftp.gwdg.de/pub/linux/packman/suse/12.3/ packman
zypper addrepo -f http://opensuse-guide.org/repo/12.3/ dvd
This adds the repositories needed to install everything and keep it up to date. The next command tells OpenSUSE to install the various codecs you need:
zypper install libxine2-codecs k3b-codecs ffmpeg lame gstreamer-0_10-plugins-bad gstreamer-0_10-plugins-ugly gstreamer-0_10-plugins-ugly-orig-addon gstreamer-0_10-plugins-ffmpeg libdvdcss2
Now once everything is installed and you have some glorious music playing out, I'd say it would be prime time to grab a fine ale and save yourself 1p ;)
Sunday, 20 January 2013
Time and Date missing from Ubuntu
I've been trying to use Linux more and more, as the command lines I know have always come in handy on some UNIX servers I've had to look after, and on Macs too. I am currently running OpenSUSE 12.2 on a laptop and Ubuntu 12.10 on my PC. I'm not sure how I did it, but while I was ripping bits of software out of Ubuntu, the time and date went missing from the top right-hand corner of my display. I searched in Unity for Time and Date and, alas, it was not there.
Since it was only the indicator that was missing, it was a simple case of re-installing it. To do this, run the following command in the terminal:
sudo apt-get install indicator-datetime
Now after you have installed this you have two options: either log out and back in again (a bit too Microsoft-like for me), or kill off the Unity panel service, which restarts automatically. To find it, run the following command:
ps -ef | grep unity-panel-service
As you can see from the output above, this comes back with the PID of the program (the number next to the user name), and the one you want to kill off is /usr/lib/unity/unity-panel-service. To kill the service off, just run the command:
kill <PID you just discovered using ps -ef, which in my example above is 4643>
You will see Unity close down, refresh, and then the time will appear again in the top right-hand corner. Now would be a perfect time (no matter what the clock now says on your PC) to have a beer.
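The same find-and-kill dance works for any process. Here is a minimal stand-in demonstration using a background sleep instead of the panel service (killing unity-panel-service only makes sense inside a running Unity session):

```shell
# Start a stand-in process in the background; $! gives its PID directly -
# the same number you would otherwise read out of the ps -ef output.
sleep 300 &
pid=$!
kill "$pid" && echo "killed process $pid"
```

The $! shortcut only works for processes you started yourself, which is why ps -ef and grep are the way to find an already-running service like the panel.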
Monday, 19 November 2012
Windows 8 not showing your content in Music, Photos and Videos Apps
I recently upgraded to Windows 8 and have found it strange to get used to, from Win + I bringing up options for apps, to not having the Start menu any more. However, the biggest problem I had was opening the Music, Photos and Video apps and them not showing ANY of my content. I'd read about other users having this issue with NAS and other network shares, but mine is stored on a separate local SATA drive. My Libraries were set to view this information, and selecting my Music Library showed all the music that I couldn't use in the Music app.
There is, however, a way to fix this! The Music, Photos and Video apps not only use your Libraries to gather media data; they also use the Windows indexing tool. My media was on another drive and, a while back (for whatever reason, I cannot actually recall), I had disabled Windows Indexing from looking at my other drives, so when I upgraded from Win7 to Win8 this area had never been indexed. So I went into Control Panel > Indexing Options, clicked on Modify and selected these extra folders. I then went into Advanced and kicked off a rebuild to hurry things along and start fresh.
After a while of indexing I opened my media apps in Win8 and the content was all there! Once you open these applications they then begin to load up previews and thumbnails, so again you must wait a little while (my music took an hour to fully sort itself out), but once it's all done everything is there in all its glory. No registry hacks or anything like that, just a slight tweak to what you want indexing. I think I still prefer how Zune works to the new Music app, as I can see much more, however I'm sure it (along with Windows 8) will grow on me. Now time for a beer or a lovely little glass of Glayva to celebrate ;)
Monday, 21 May 2012
Find email address in Public Folders
Looking after several thousand mailboxes, public folders and distribution groups can sometimes make it difficult for you to keep track on where certain email aliases are assigned.
Now mailboxes, distribution groups and mail contacts are easy as you can add a filter in the Exchange Management Console to find these, however the same cannot be applied for Public folders.
Once again PowerShell comes to the rescue in the form of the Exchange Management Shell. Running the following command will bring up all the details you need:
Get-MailPublicFolder email@address.com | Get-PublicFolder
This command brings up the name of the public folder along with its parent path, so you know exactly where the public folder lives and can navigate straight to it.
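If you have several aliases to track down, the same pipeline can be wrapped in a loop. A hypothetical sketch (the addresses are made up, and this assumes the Exchange Management Shell is loaded):

```powershell
# Hypothetical addresses - look up each one and show where its folder lives
'sales@contoso.com', 'info@contoso.com' | ForEach-Object {
    Get-MailPublicFolder $_ | Get-PublicFolder | Select-Object Name, ParentPath
}
```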
Not only do you get to find the public folder, you also get to learn a little more about PowerShell with the nice "Tip of the day" that appears in the Exchange Management Shell.
Now that deserves a beer!
WinSXS taking up space after Service Pack install
I was working on a user's desktop the other day which had an SSD installed a while back (back when 128GB drives were silly prices!); it could only hold 64GB and it was full (well, around 500KB free). This meant her .ost for Outlook couldn't expand, causing no end of problems for her. Cleaning out temp files and rebuilding the .ost managed to bring back around 2GB, which just wasn't enough really, so after hunting around on the drive to find where the rest of the space had gone, I found the WinSXS folder.

This folder changed quite a bit from XP to Vista, from the old .INF files in XP to .mui files and .exe's in Vista and beyond. It is what lets you run tools such as SFC (System File Checker) or install additional features and roles in Server 2008 etc. As handy as this is, it can take up a great deal of space, especially when Service Packs are installed (which was the case for this poor user).

To help clean this folder up there is a handy tool built into Windows, which can be run from the command prompt (you need to run the command prompt in elevated mode: hold down Shift, right-click on the Command Prompt icon and select Run as administrator...) and does a nice job of doing it all for you. Please bear in mind that after this runs, you won't be able to roll back the Service Packs. The command to run is below:
DISM /online /Cleanup-Image /SpSuperseded
This will take around 20-30 minutes to run (depending on the OS and how much space it can reclaim) and can usually bring back around 3-5GB, which is the perfect amount of time to crack open a can of beer.
[Screenshot: WinSXS size before the command was run]
[Screenshot: WinSXS size after the command was run]