Tuesday, October 7, 2014

Adventures in Git OR How to fork folders in one Repo to their own repo with history

In short, I had one repo with multiple folders, where each folder really represented its own project, and I needed to fork each folder to a separate repo WITH history. This is the solution I came up with. It requires git 1.8.4 or later (for `git subtree split`).


```shell
# From inside the original repo's working folder, create a branch
# filtered down to just the one folder (and its history)
git subtree split --prefix=sourceFolderInAnyExistingProject -b anyNewBranchNameForFork

# Create a bare repo to receive that branch
mkdir ../someNewRepoFolder
cd ../someNewRepoFolder
git init --bare

# Back in the original working folder, push the filtered branch
# into the bare repo as its master
cd ../originalWorkingFolder
git push ../someNewRepoFolder anyNewBranchNameForFork:master

# Point the new repo at its remote and push everything
cd ../someNewRepoFolder
git remote add origin YourNewRepoRemote
git push -u origin --all

# Optionally, replace the bare repo with a normal working clone
cd ..
rm -rf someNewRepoFolder
git clone YourNewRepoRemote someNewRepoFolder
```



Sorry if that's not clear enough, but basically: you use `git subtree split` to take ANY folder presently in a repo and create a branch that's filtered down to just that folder and all of its history. Then you create a bare repo, push the filtered branch into it, and push that bare repo (which has no working folder) to your new remote. Finally, either trash the folder or clone the remote into a fresh one, and BAM: you've forked not just an existing repo, but a single folder within it if you prefer, and the history stays intact. (I had many "projects" in one repo that needed to be forked individually, so this was useful for me.)
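Since I had many project folders to split out, here's a rough sketch of looping the same steps over every top-level folder. The `split-` branch prefix and the sibling `<folder>.git` naming are placeholders of mine, not part of the original steps:

```shell
# Run from the root of the original repo's working folder.
# For each top-level folder, create a filtered branch and push it
# into a sibling bare repo named after the folder.
for dir in */ ; do
  name="${dir%/}"
  git subtree split --prefix="$name" -b "split-$name"
  git init --bare "../$name.git"
  git push "../$name.git" "split-$name:master"
done
```

Each resulting `../<folder>.git` can then get its own `origin` remote and be pushed with `git push -u origin --all` exactly as above.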

Friday, April 4, 2014

SuperAdmin (GodMode) folder for Windows 8.1

I didn't think there were any cool tricks left for Windows that I wasn't aware of but I just learned otherwise.

Create a folder using one of the following names, based on your version of Windows, and you'll have access to a SuperAdmin folder that makes all sorts of admin shortcuts as easy to find as they always SHOULD have been. I've only tested this on 8.1, but I'm under the impression that it should work for 7 and 8 also.

Windows 8.1 - SuperAdmin.{ED7BA470-8E54-465E-825C-99712043E01C}
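For example, from a command prompt (the "SuperAdmin" prefix is arbitrary; any name before the dot works, since the GUID is what triggers the special view):

```shell
# Create the special folder (on the Desktop, or anywhere)
mkdir "SuperAdmin.{ED7BA470-8E54-465E-825C-99712043E01C}"
```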


Friday, March 7, 2014

SSL Certs for IIS with PFX once and for all - SSL and IIS Explained

The problem is more common than dirt, but the solutions provided are so often entirely incorrect and obscure, likely because the "Linux and Apache" folks are trying to use their standard approach with MS platforms that like to do their own thing.

Here's what you need to understand:

CSR - Certificate Signing Request: A CSR is how you ask for a certificate. It provides things like your identity, the intended use, etc.

KEY - Private Key: Your possession of the private key is how you prove (as a web site) that you are master of the certificate. Anything you sign with your private key can be verified by anyone holding your public key (like the one your web server gives the client), and since you are the only one with the private key, a successful verification guarantees (when you control that key correctly) that the data came from you. The reverse also holds: anything encrypted with your public key can ONLY be decrypted using your private key, which is how a client can send you secrets (like the keys used to encrypt your web traffic) that only you can read.

CER/CRT/CERT/CERTIFICATE - The certificate, which contains your public key (see the KEY explanation) along with your identity details and the CA's signature.

PFX: Along comes Microsoft with its own way of doing things, and the confusion that follows. IIS expects a PFX (PKCS#12) bundle, which packages the certificate AND the private key together, unlike the separate key and cert files common elsewhere in the PKI world. So how do you give IIS a PFX that includes the private key (needed to encrypt/decrypt your web traffic)? The intended way is for you to generate the CSR using IIS, give that CSR to your CA (internal or public), then get a certificate back that matches the private key created by IIS when it made the CSR. This is where things go "all wrong" in the public CA business (from the perspective of us using IIS; really, they have it right). Most likely you created the CSR using your CA or the company reselling for them (cheapssl, gogetssl, etc.). When you do this, they give you a private key to keep safe and a certificate to use, but IIS/Windows has no concept of that private key, so you're dead in the water from the start.


The solution:

Use the IIS "Server Certificates" UI to "Generate Certificate Request" (the details of this request are out of the scope of this article, but those details are critical). This will give you a CSR prepped for IIS. You then give that CSR to your CA and ask for a certificate. Then you take the CER/CRT file they give you, go back to IIS, and "Complete Certificate Request" in the same place you generated the request. It may ask for a .CER and you might have a .CRT; they are the same thing, so just change the extension or use the *.* extension drop-down to select your .CRT. Now provide a proper "friendly name" (*.yourdomain.com, yourdomain.com, foo.yourdomain.com, etc.). THIS IS IMPORTANT! This MUST match what you set up the CSR for and what your CA provided you. If you asked for a wildcard, your CA must have approved and generated a wildcard, and you must use the same. If your CSR was generated for foo.yourdomain.com, you MUST provide the same at this step.

Now select the PERSONAL store (no, not Web Hosting). This will import your CRT into the personal store, where it can be associated with the private key generated by IIS when it created the CSR. THIS IS WHERE ALL THE PROBLEMS COME FROM. This is what causes SO many headaches. The CRT you got from your CA and the KEY they gave you are useless here, unless you do as others might suggest and go play around with other tools like openssl (which can work, but why bother when you can do it the way IIS intended?).

Now you should see your cert in the server certificates list and if you open it, you should see something like, "You have a private key that corresponds to this certificate".

Now you can use the Export function (IIS 8 provides this in the same place as the "request" and "complete request" links), or use the Certificates MMC, navigate to the personal store, and export from there to the PFX format. You need to provide a strong password to protect this file, because it will contain the entire certificate chain AND your private key. In other words, this PFX has the keys to the entire "domain" (speaking figuratively).

Chances are you don't even need the PFX now, because you already have the certificate inside IIS. But if you're using the centralized certificate store like I am, you do, AND the file name is critical. For wildcard certs, the name MUST be _.yourdomain.com.pfx (assuming your request was for *.yourdomain.com). If you asked for www.somedomain.org, then the filename must be www.somedomain.org.pfx, because this is how SNI and the centralized SSL store will look for the right one.
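For example, assuming an exported wildcard PFX and a centralized-store path of my own invention (the path is a placeholder; only the _.yourdomain.com.pfx naming is the real requirement here):

```shell
# Copy the exported PFX into the centralized certificate store,
# renamed to the pattern the centralized store expects for *.yourdomain.com
cp exported.pfx /path/to/CentralCertStore/_.yourdomain.com.pfx
```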


In summary, the easy way to install an SSL cert into IIS:
  1. Generate your CSR using IIS
  2. Provide that CSR to your CA
  3. "Complete Request" using the CER/CRT you get back from the CA
  4. [optional] Export to PFX and protect with a strong password
  5. Live long and encrypt
Update (20191201): Even today, I am using this post as I have to renew SSL for a stack that lives both on IIS and Node.js. Because of this, I also need the private key file so if you've done step #4 above and need a .key also, the following might save you some googling. Be sure to protect your .key file when you do this though!

`openssl pkcs12 -in exported.pfx -nocerts -out key.pem -nodes`
then
`openssl rsa -in key.pem -out server.key`

Now you can use server.key and the CER/CRT you were provided.
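If you want to double-check that the extracted key really matches the cert before deploying (a common openssl sanity check, not part of the original steps; "server.crt" stands in for whatever CER/CRT your CA gave you), compare the public-key modulus of each. The two hashes must be identical:

```shell
# Both commands should print the same hash if key and cert match
openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa  -noout -modulus -in server.key | openssl md5
```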

Tuesday, March 4, 2014

My first experience with VMware vCHS - Part 2 - vCloud Connector

This is the 2nd in a vCHS series. You can find the previous entry HERE.

OnPrem - vCloud Connector (general label for a collection of systems and configuration)


If you don't know what vCloud Connector is, right now all you need to know is this: it can't do actual migrations (it creates vApp templates from powered-down VMs), and the only other thing it's really useful for is stretch migrations (a vShield networking trick that's only useful in some very rare situations).

VCC does NOT allow you to use the vSphere client to manage vCHS any more than you can do on the web (for the most part, it just gives you a window to the web interface and does some back-end management that you can also initiate from the web).
  • vCloud Connector Server - Provided as a zip containing an OVF that can be easily added to any host onPrem. Once you've installed it, visit https://assignedip:5480/ (port 5480 is a theme, so learn it). Once you've configured networking, set the time-zone, and changed the password, continue to the Node, which is the same process.
  • vCloud Connector Node - Provided as a zip containing an OVF that can be easily added to any host onPrem. Once you've installed it, visit https://assignedip:5480/
    • Once you've installed the node and done the same basic config (IP/Time-zone/Password), you need to go to the Node/Cloud menu and register your vCenter to the node (the node manages vCenter and the VCCS manages the node).
    • Next you can go back to the VCCS and "register the node".
    • Now we want to register the vCHS node to your VCCS.
      • This is what allows you to use your vCenter to manage onPrem and vCHS together (and any other nodes you register). This is where I hit some frustrating resistance.
      • To register the vCHS node you need the Node URL (given to me in my onboarding email) and the VCD Org Name (not given to me; I had to deduce that it's shown in vCHS vCloud Director as the unique ID at the very top of the page, next to the navigation arrows; for me it was MXXXXXXX-XXX). You also need a username and password, which you might think would be the same ones you log into the vCHS portal with, but apparently they are not.
      • After a few failed attempts, I was able to navigate to the vCloud Director, My Cloud, Logs, Events and see the failed attempts. Despite multiple emails and a phone call to vCHS support, I presently do not have the credentials I need so I'm stuck with the connector for now. More to come.....
      • UPDATE: After speaking with vCHS support, it sounds like my credentials are out of sync in one of their DBs and I was doing everything right. Now I wait for them to tell me it's fixed.
      • The solution came fairly quickly (less than 2 hours) and I had my vCHS node registered with VCCS.
If you have questions, please feel free to ask in comments. This was somewhat brief, but that's because once it's working, it's really very straightforward until/unless you try a stretch deploy (which I haven't).


Next will be an entry on vCHS networking.

My first experience with VMware vCHS - Part 1 - Setup and IPSEC

Prologue to vCHS Migration


I've decided to do a few articles regarding my migration to VMware vCHS (cloud hosting) solution and this is the 1st in the series.

I've had a few significant technical issues, but the support has been as good as I can imagine. Last night I even met up with the specialty sales rep assigned to my account, and we spent nearly 3 hours talking over dinner about things in general and my project. VMware is not only doing more than I could hope for to get me up and running, but they're also looking at how they can help promote our business through their "success stories" in a mutually beneficial manner. That's wonderful because, after 10 years of seeing 20%+ growth per year, we're likely going to explode this year, leaving those gains in the dust, and any help they provide will only strengthen our growth.

By the way, the "Customer Success Team" isn't a hollow catch-phrase. It reminds me of my experience as a consumer with American Express. They JUMP every time I need help. We have scheduled 2-hour calls (that ran 3 hours) where they literally sat on the phone with me to walk through issues, waited for me to learn and question them, AND did it with a wonderful attitude (sincerely eager to help).

In summary, the migration has been rough, but when do migrations ever go smoothly? That said, I can't imagine a better infrastructure than they're giving me, and their support is the best I've seen for any product/service anywhere, save perhaps for some high-end auto purchase experiences I've had.

I would HIGHLY recommend vCHS to anyone who needs a rock-solid, highly flexible hosting solution that covers everything end to end.

At the moment we have 3 units of the VPS solution (they have dedicated and shared; VPS is the shared solution), which includes 15GHz dedicated CPU (you really can't talk cores or clock speed with the way they work), 30GHz burst, 60GB dedicated RAM, and 2TB of SSD-cached SAN storage. I'm constantly seeing 150-250MB/s with <2ms latency, and my SQL Server is running as fast as it did locally on dual quad-core Xeons with an 8-disk RAID 10 of 15k SAS drives.



My present hosting provider hasn't been a pleasant experience; I seem to have some bad luck this way. My last experience was with RackSpace, who have very good service in so many ways, but I found out the hard way that their security is severely lacking, and when they made a mistake that cost me $10,000s, they had NO interest in taking responsibility. I'm not big on lawsuits, so I dealt with the months (almost years) of heartache, moved on, and tried Central Host (the hosting division of 8x8, which has since been bought up by Black Lotus).

My experience with 8x8 was a nightmare, and I'm still feeling the pain as Black Lotus does their best to clean up the data center I'm in. They've also made amazing efforts to make things right with me, but I'll leave that for my other blog. In the end, they offered me an incredible deal (~$2,000/mo for dedicated virtual hosts, 6-core HT 3.5GHz Xeon CPUs, 32GB RAM, 2TB SATA RAID with 200GB SSD caching, 24 IPs, and each of 2 hosts at different data centers), but they use App-Assure (Xen and/or KVM), I'm a VMware guy interested in uptime more than raw performance, and I didn't want to learn new systems either. vCenter is amazing, and I know how to keep my business running with it, so I went with vCHS. I also appreciate the way vCHS scales compared to typical virtual hosting solutions: I don't have to worry about "hosts".

The way the actual provisioning went was very straightforward. I got an email once the contract was signed telling me provisioning had started. Once provisioning was done, I got an email with vCloud login and password setup links. I set my password, logged in, and headed over to the edge network configuration interface in the vCloud Director portal (the things they haven't simplified and put into the vCHS portal are accessed through a full-blown vCloud Director instance, which I prefer anyway, not having really taken to the vSphere Web interface as it is).

Since I use pfSense (presently 2.1 and everything below assumes that) at all my other data-centers, I did a quick search and ended up using THIS guide as a reference to catch the nuances of their IPSec implementation (main/aggressive, which encryption he got working, etc..)

From there I went to my only gateway, edge gateway services, enabled VPN, set up the public IP, then added my tunnel (they combine phase 1 and phase 2 into a single UI). I quickly found that they use main mode and not aggressive, which I should have caught from the tutorial I found. This is where I had my first issue. The vCloud Director gateway status shows "System Alerts" with a red icon that was clickable, and I wanted to see what it had to say, so I clicked it and watched the entire UI refresh. I tried this a few times before giving up. I'm guessing it's a popup-blocker issue at this point, but since the Director comes up in a new window with no address bar and uses Flash (grr, WHY would they do that with such a new system???!!!), there was no way to tell quickly, so I gave up there and went back to the pfSense logs to see how things were going.

As so often is the case when doing IPSEC between pfSense and anything other than pfSense, I had to figure out what IPSEC standards they use. Here's what I found and I hope it saves someone some time. It's taken me years to get to a point where this doesn't become an all-night project:

If you know what you're doing and want to skim the settings, just know that even though vCHS asks for "Peer ID", it only supports Main mode, which only supports IPs as IDs, so Peer ID MUST be the remote gateway IP. This stinks for those of us trying to use more advanced methods to get around dynamic IPs. Really, "Peer ID" should read "Peer IP", and the only reason you have to provide both it AND "Peer IP" (which should read "Peer Gateway IP") is because they do support NAT-T.

Once I realized my mistake with the Peer ID, I had to delete the entry and create a new one, then got an error about "Configuring Edge Gateway Services". I refreshed and it disappeared, so I figured it was a fluke until I kept seeing the same errors about IDs; when I checked vCHS, the settings were back to the first entry. So I deleted and tried again, triple-checking against the settings below, and THEN I got a connection (I already had phase 2 set up; see below Phase 1 for settings). Once everything was working, my IPSec log had only 3 entries before a full connection (since there were so few options or advanced features in use).

Phase 1
  • Main Mode
  • Name: Anything descriptive you like
  • Description: More descriptive stuff you like
  • Enable this VPN: checked (default)
  • Establish VPN to: "a remote network"
  • Local Endpoint: (predefined external IP endpoint at the vCHS end)
  • Local ID: The vCHS public IP assigned to the local endpoint (pfsense: Remote Gateway and Peer identifier)
  • Peer ID: The IP of the remote gateway (pfsense: My identifier, My IP Address)
  • Peer IP: The IP of the remote gateway
  • Encryption protocol: AES-256 (pfsense: same for Phase 1 Encryption algorithm)
  • pfsense-Hash: SHA1
  • pfsense-DH Key Group: 2 (1024 bit)
  • pfsense-Lifetime: 28800
  • pfsense-Nat-T: Disable (if you need it, enable and update the Local ID on vCHS accordingly)

Phase 2 (pfsense)
  • Mode: Tunnel IPv4
  • Local Network: Lan subnet
  • Remote Network: CIDR format for remote subnet (ex. 192.168.110.0/24)
  • Protocol: ESP
  • Encryption: AES 256 (they only appear to support AES at this time)
  • Hash: SHA1
  • PFS key group: 2 (1024 bit) (they support off also)



Friday, February 28, 2014

Adventures in Node.js - Using NPM on Windows

The following still needs to be more carefully verified, but I'm making it public now in the hope that it helps someone....


We'll call this installment #2 of "Adventures in Node.js" even though my previous Node entry didn't have this in mind.

I'm short on time today so I can't elaborate so much but in short:

One time on your machine:

  1. Install Python 2.7 and add the python.exe folder location to your PATH environment variable.
  2. Either 1 or 2 depending on your setup:
    1. If you don't have Visual Studio 2008 installed (which provides vcbuild.exe):
      1. Install the Windows SDK (to provide vcbuild.exe); there's a good reference at https://github.com/TooTallNate/node-gyp/wiki/Visual-Studio-2010-Setup. Then ensure that the path to vcbuild.exe is in your PATH environment variable.
      2. npm install -g socket.io --msvs_version=2012
    2. If you DO have VS installed, run the following to setup your environment properly:
      1. Add the path to vcbuild.exe to your PATH environment variable then run the following
      2. npm install -g node-gyp
      3. node-gyp configure --msvs_version=2012
      4. node-gyp build
      5. npm install -g socket.io --msvs_version=2012

Once for your project:
  1. npm install
Now your project should be ready for you to use npm in general. My purpose today was to check out grunt, grunt-cli and grunt-devtools thanks to recommendations from Paul Irish both at Google I/O 2013 and a couple weeks ago at the Chrome Dev Summit.

Now that I have npm working it was as simple as (from http://gruntjs.com/getting-started#preparing-a-new-grunt-project):
  1. npm install -g grunt-cli
  2. npm install grunt --save-dev
  3. npm install grunt-devtools
Next I try to setup and use Yeoman (another framework he suggested that relies on Grunt).

Tuesday, February 25, 2014

How We Can Begin to Fix the Government and Your Responsibility and Right as a Juror

One way we have to begin to get our government back in check 100% legally, without any violence

WARNING: If you continue reading, you will likely ruin your ability to serve on any jury that is well managed OR you will forfeit your ability to use this knowledge without being guilty of perjury. This is no joke and you better research this yourself because I'm not responsible for your actions, your interpretation of this information, or any consequences. On the other hand, it is YOUR RESPONSIBILITY as an American to know this.


YOU WERE WARNED

Scroll down to read....


































Pushed down so you must scroll to read and won't be exposed without that effort......



























*Jury Nullification*: Ultimately, it is the juror's RIGHT and RESPONSIBILITY to render a verdict based not ONLY on the facts and the law, but ALSO on what you think is right, thereby judging the law itself as well as the "crime". It is a mechanism very intentionally built into the core of our legal system.

This is not my interpretation either, this is our founding forefathers design:

John Adams wrote, “_It is not only the juror’s right, but his duty to find the verdict according to his own best understanding, judgment and conscience, though in direct opposition to the instruction of the court._”

Thomas Jefferson wrote, “_I consider trial by jury as the only anchor yet imagined by man by which a government can be held to the principles of its constitution._”

This video and the following link help explain, but beware: jury selection includes questions worded very carefully to eliminate people who understand their full responsibility and power as a juror, and those questions are typically worded in a way that, if you use jury nullification, you will likely find yourself guilty of perjury. This is because the people who run the legal system are as susceptible to this power as criminals, and your ignorance of it gives them tyrannical power. As long as they can keep you off a jury, they can manipulate the legal system rather than have it manipulated by the people who should be doing so: "THE PEOPLE".

The only real way to address this abuse is to spread the knowledge so far that they cannot reasonably assemble an otherwise fair jury without allowing jurors who know their full responsibilities and rights.


Tuesday, February 11, 2014

How to KILL Performance in Chrome 34 dev channel with one CSS rule

I just accidentally discovered a sure-fire way to KILL performance in Chrome 34 dev. I mean, going from a decent 30-60 fps to 0.25-1 fps.

See link below for sample code.

On a page with a good number of input elements (like a management dashboard I'm working with), apply "border: dotted 1px rgba(200, 200, 200, 0.9);" to the input elements. It's the combination of the INPUT element, border-style: dotted, and border-color: rgba(). Remove the dotted style (use solid) or drop the alpha channel and use rgb() instead, and performance returns.

Even after the page is fully rendered and I scroll a few times (which causes things to smooth out a bit), dev-tools is rendered useless.

A sample page can be found here: https://github.com/rainabba/miscCode/blob/master/Chrome34KillPerfUsingInputs
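If you'd rather generate a local repro than clone the sample, here's a quick sketch (the element count and page structure are my own choices; only the quoted border rule comes from above):

```shell
# Build a page full of INPUT elements styled with the problematic
# dotted + rgba() border combination
cat > repro.html <<'EOF'
<!doctype html>
<style>input { border: dotted 1px rgba(200, 200, 200, 0.9); }</style>
EOF
for i in $(seq 1 500); do
  echo '<input type="text">' >> repro.html
done
```

Open repro.html in Chrome 34 dev and compare frame rates against the same page with `solid` or `rgb()` substituted in.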

Thursday, January 30, 2014

Glass is NOT Augmented Reality and you Should Stop Comparing it to AR

First, let me say that I AM a Glass explorer, and I'm also KickStarter backer #1 for Meta. I've also been working with just about every stereoscopic tech you can think of over the last 15 years (I'm a backer of the Oculus Rift, was using LCD shutter glasses back in '97, used to DRAW stereoscopic images by hand as a child, wrote my own "Magic Eye" app in DOS 6 using QBasic at the age of 15, etc.).

I feel like I've been talking to walls for the last year, but here it is, said better than I have been able to word it so far. The CEO of Meta (SpaceGlasses, Meta-View), Meron Gribetz, the man who hired Steve Mann (a top authority on wearable computing) and Steven Feiner (credited with coining the term augmented reality), says in very simple terms that Glass is not AR, so you don't have to take my word for it. To understand how much of an authority Meta is on AR, make sure you know who Steve Mann and Steven Feiner are, and then understand that the CTO of Meta has been working with Mann for more than 10 years.

"The 3D output of course allows you to paint graphics on regions of interest in the real world; *TRUE* augmented reality"

"The modern definition of augmented reality; the ability to take digital information and register it to parts of the real world."

"Glass is a Notification Machine"

Glass is a HUD which can be manipulated to provide some VERY BASIC AR-LIKE behavior. That doesn't make it AR any more than the Nintendo Virtual Boy was VR. That's not to call Glass inferior in any way; Glass just wasn't intended for AR applications. Eventually technology will get to a point where a real AR display like Meta will be useful for something like Glass (lightweight, unobtrusive, wireless, decent battery life: everything Meta is trying to improve right now but falls far short on compared to Glass). With its present design, Glass will never be real AR: no stereoscopy, no direct FOV, no 3D tracking, almost nothing that would allow it to convincingly "augment reality" any more than your smartphone currently can. If you're going to call Glass an AR device, then so is ANY smartphone with a camera on the back of a screen. If anything, your phone is MORE appropriate for AR than Glass.


This is the man who will bring AR mainstream and who presently employs the two TOP authorities on AR. He validates what I've been saying since Glass was released. I hate to even mention Glass and AR in the same topic because I don't want to perpetuate the confusion but to clarify, I must do so.

Clarification: Based on a question just posed to me elsewhere, I came up with the following clarification.

The reason I'd more readily call a smartphone AR hardware is that you can move the display into your line of sight, and it will then (in general) have a camera pointed where your eyes are. Glass intentionally places the display above and to the side of your primary vision, so it cannot be in your direct field of view. It does have a forward-facing camera with a huge FOV, though, so the display can be aligned with the camera. It might be a fine distinction, but that's why I say it isn't AR but has AR-like ability. You could call a car a bus because it can carry many passengers, but it's still not a bus; it's a car, and the difference might seem insignificant, but only until you need a bus (something that does what the name implies).

Furthermore, to "augment reality", you must have some reasonable level of belief that your reality has been augmented. As the Oculus Rift has proven, just putting a display in front of you does not accomplish this. Certain things like stereoscopic vision and latency are FAR more important than just having a display. A phone has a larger, higher-resolution display that can easily take up a large portion of your primary FOV, which is critical to AR. Add an auto-stereoscopic display (EVO 3D or Nintendo 3DS), and you'd also gain stereoscopic vision and, with that, be much closer to what AR needs to "augment reality" as opposed to just superimposing images into space. Glass has an interesting phenomenon where the apparent display size is entirely dependent on what you see behind it. Because no stereoscopic cues are provided, the only way to make it even remotely believable is perfect scaling of the virtual image to the real one, something next to impossible with only the one camera. The EVO 3D has two cameras and so would be that much closer STILL to being a useful AR device, but it falls far short in one way that Glass doesn't (not entirely, anyway): keeping your hands free so you can interact with reality.

A major difference between superimposing graphics and AR is that you not only could believe at some level that your reality has been augmented (not just projected over), but that you can still interact with your reality. That rules out the "phone as AR hardware" idea. So to sum it up: AR must make your reality appear to have changed (not just a projection), and you must be able to interact with that augmented reality. The primary difference between VR and AR is that VR disconnects you from reality. Even with various hand sensors, VR wouldn't be AR. Similarly, an HMD (head mounted display, like Glass and similar devices) will never, by itself, be AR. The only real common thread is a see-through, head-mounted display (HMD).
