Saturday, January 29, 2011

How to configure Windows Server 2008 DHCP to supply a unique subnet to a remote site?

The Main site hosts the only Windows server: a Windows Server 2008 R2 domain controller running AD, DNS, DHCP, and Exchange 2007. The Remote site has no Windows server.

The Main site subnet is 192.168.1.0/24; the Remote site subnet is 192.168.2.0/24.

The Windows Server at Main site is supplying 192.168.1.0/24 via DHCP to hosts at the local site where it resides. Is it possible to configure that Windows Server to supply 192.168.2.0/24 to hosts at the Remote site and if so how?

We could use the Cisco router at the Remote site to supply DHCP but if possible we'd like to use the Windows Server at the Main site to supply DHCP.

  • No, not possible. As in: the remote site does not forward DHCP requests to the local site by itself. This is because those requests are broadcasts, which are NOT transmitted outside the Ethernet segment - i.e. they do not cross over the router.

    Yes, it is possible. You need to set up a DHCP relay agent on the other side (it can be part of the router) to forward DHCP requests to the Windows server. Then you set up a normal scope for the remote subnet on the DHCP server.
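
    As a rough sketch (not from the original answer - the addresses here are assumptions based on the question), the relay side on a Cisco router usually amounts to an ip helper-address statement on the interface facing the remote LAN, assuming the Windows DHCP server sits at 192.168.1.10:

    ! Hypothetical addresses - adjust to your environment
    interface FastEthernet0/0
     description Remote site LAN (192.168.2.0/24)
     ip address 192.168.2.1 255.255.255.0
     ip helper-address 192.168.1.10

    On the Windows side you then add a second scope for 192.168.2.0/24; relayed requests arrive stamped with a gateway address from that subnet, which is how the server knows to hand out leases from the correct scope.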

    That said, the idea may be terrible. The problem is: whenever the link is down and a computer comes online during that time, it gets no IP address, and pretty much the user needs to restart (unless you want to talk users through "ipconfig /renew" on the command line). DHCP (unlike IPv6 in general) has no concept of assigning addresses to computers after the network comes back up. Technically you would be better off getting a small server and putting it at the remote site. This can be a small Atom-based box. It can serve as:

    • Local DHCP server
    • Local domain controller (same problem - link down, things get bad)
    • Local DNS server
    • Possibly a local file store, at least for a special admin share so you have fast access to your tools

    If you don't trust the remote site, with 2008 R2 you can make the controller an RODC (Read-Only Domain Controller). It will still stabilize operations.

    I would consider it bad practice to supply DHCP to a remote site from your central site.

    caleban : I think the reason this whole idea came up is it seemed cheaper. It would be cheaper to use the single server at the main site than to set up a second server at the remote site i.e. purchase another license for Windows Server 2008 R2 and the client access licenses. 2008 R2 and the CALs for the remote site would be several thousand dollars.
    Stemen : But... how would that be cheaper than continuing to use DHCP on the Cisco router? Is something wrong with doing it that way? Personally, I'm in the middle of deploying a bunch of DHCP servers for a corporate VOIP system. Each server is running DHCPD on CentOS, on a PowerEdge box. Our priorities weren't cost, but reliability -- with failover enabled, we'll be able to serve DHCP from either each machine in the field, or from a single server in our main datacenter.
    TomTom : Not cheaper - up to the moment you have a day or two of downtime and people cannot work because you were too cheap. It also would be another backup of the domain (how many domain controllers do you run?). You run a single server? Have you thought about the catastrophe cost of having to COMPLETELY REINSTALL ACTIVE DIRECTORY because you don't have a single backup unit? OUCH. I mean REALLY OUCH.
    TomTom : Costs for CALs - huh? Don't get me wrong, but either the remote systems work against your server (so they already have a CAL), or they do not (then they don't need a CAL to access DHCP). ANY single-server solution sounds like "I want a disaster" to me. Sometimes you can be TOO cheap.
    From TomTom

Server 2003 on domain won't let domain user have local profile

I have a few servers that are exhibiting this behavior: you log in and always get put into a temporary profile. The server is licensed for TS. The user I am testing with has local admin rights, so it doesn't seem to be a permissions issue on the server.

I first get a message that the user's roaming profile cannot be found, even though we don't use roaming profiles. Immediately after, I get another message saying a local profile could not be loaded, so it will only use a temp profile.

Any help would be greatly appreciated.

    1. Make sure their profile has been deleted and nothing exists at C:\Documents and Settings\%USERNAME% and C:\Documents and Settings\%USERNAME%.%DOMAIN%
    2. Open up regedit
    3. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList
    4. Remove the key for the user with the problem. The key will be named after the user's security identifier (SID), like S-1-5-21-3141592-6535897932-3846644798-1649 (see the command-line sketch after this list).
      • You can look at the value for HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-3141592-6535897932-3846644798-1649\ProfileImagePath to help you figure out which profile is which.
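
    As a sketch only (the SID below is the placeholder value from the example above, not a real one), steps 2-4 can also be done from a command prompt:

    rem Check ProfileImagePath first to confirm you have the right SID
    reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-3141592-6535897932-3846644798-1649" /v ProfileImagePath
    rem Then remove the stale ProfileList key for that user
    reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-3141592-6535897932-3846644798-1649" /f
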
    RobW : I did this, and now when I log on to the server, I get the normal log on screen... fill in user name and password, and then the RDC connection ends (with and without the console switch applied) as soon as I click the button.
    RobW : Just tried another user as well; with this user I get the exact same results as before: it tries to access a roaming profile even though there isn't one, then can't find or create a local profile either and logs on with a temp profile.
    Zoredache : Hrm, that is unusual. Are you sure the user doesn't have roaming profile settings applied to their account? Can you check the eventlog after a login, there should be some errors being logged. I do know the above helped fix a situation that I was having that I believe was similar.
    RobW : I get the following errors in the event log (user and server name scrambled):
    RobW : Event Type: Error Event Source: Userenv Event Category: None Event ID: 1521 Date: 3/10/2010 Time: 1:14:01 PM User: zxzbz\zcbzcbzb Computer: zccb-zcbz-zcbzc Description: Windows cannot locate the server copy of your roaming profile and is attempting to log you on with your local profile. Changes to the profile will not be copied to the server when you logoff. Possible causes of this error include network problems or insufficient security rights. DETAIL - The network name cannot be found.
    RobW : Event Type: Error Event Source: Userenv Event Category: None Event ID: 1511 Date: 3/10/2010 Time: 1:14:04 PM User: czbcz\zcbzcb Computer: zcbczbzcbzcbzb Description: Windows cannot find the local profile and is logging you on with a temporary profile. Changes you make to this profile will be lost when you log off.
    RobW : No roaming profile is designated for the user in AD
    From Zoredache
  • Check your TS GPO for a roaming profile setting. The setting is at:

    Computer Configuration>Administrative Templates>Windows Components>Terminal Services>Set path for TS Roaming Profiles

    From joeqwerty
  • Go into the Group Policy Editor for the machine (gpedit.msc) Computer Config > Admin Templates > System > User Profiles > Only Allow Local Profile - Enable.

    This has been an outstanding issue and headache of mine for months... Yet a stupid fix, hopefully it helps someone else out!

    From RobW

How does Java PermGen relate to code size?

I've been reading a lot about Java memory management, garbage collection et al., and I'm trying to find the best settings for my limited memory (1.7 GB on a small EC2 instance). I'm wondering if there is a direct correlation between my code size and the PermGen setting. According to Sun:

The permanent generation is special because it holds data needed by the virtual machine to describe objects that do not have an equivalence at the Java language level. For example objects describing classes and methods are stored in the permanent generation.

To me this means that it's literally storing my class def'ns etc... Does this mean there is a direct correlation between my compiled code size and the PermGen I should be setting? My whole app is about 40 MB and I noticed we're using a 256 MB PermGen. I'm thinking maybe we're using memory that could be better allocated to dynamic data like object instances.

  • Sun says that the permgen is storage for objects that don't have an equivalence in the Java language:

    A third generation closely related to the tenured generation is the permanent generation. The permanent generation is special because it holds data needed by the virtual machine to describe objects that do not have an equivalence at the Java language level. For example objects describing classes and methods are stored in the permanent generation.

    So yeah, the permanent generation contains "internal" JVM data structures that describe objects and methods, among other things. 256 MB might be rather large, but it would depend on the application.

    In a large ColdFusion app like I run, a huge amount of classes are created for ColdFusion code that is compiled on-the-fly, and I am running with a permanent generation set to 192 MB. I think I used to run it successfully at 128 MB as well, so it's definitely something you could experiment with.

    If you're measuring performance under load before and after you tune these GC parameters, you'll be able to tell what impact your changes have on your application's performance.

    Also, the document I linked gives a lot of information about how the JVM manages its memory - it's quite good reading.

    brad : Yeah, that's where I got my quote also. I just wasn't sure if there's extra stuff going into that PermGen aside from my own code. I literally have 45 MB of compiled code, and the GC logs show PermGen usually hovers around 50 MB
    Clint Miller : Shrink it, then. Why not? Try 128 MB and let it run for a while.
  • Is your Tomcat configured to automatically redeploy applications that are dropped in the application folder? Do you use that functionality?

    If you use that functionality, there's a possibility that the amount of memory used in the PermGen space is gonna increase. This can happen due to (badly written) applications that in some way keep references to the loaded classes alive. Each time there's a redeploy, the amount of memory used in the PermGen space is gonna increase until it overflows.

    If you don't use that functionality and always have the same application running on the Tomcat server, then I would just try running Tomcat with the default settings for PermGen space. If the application loads, and runs fine for awhile, then it should be fine. If the application runs out of PermGen space, then just increase it in steps until the PermGen space is big enough.

    Why has it been configured to 256m (as seen in your other question) in the first place?

    Btw, yes, there is a correlation between the amount of loaded classes and the amount of needed space in the PermGen area. So, yes, the more code that is loaded, the more PermGen space you're gonna need.
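
    If you want to see how much PermGen is actually in use before shrinking it, one option (a sketch, assuming a Sun HotSpot JVM, where jstat ships with the JDK and <tomcat-pid> is a placeholder) is to sample the running process:

    # PC = PermGen capacity in KB, PU = PermGen used in KB; sample every 5 seconds
    jstat -gc <tomcat-pid> 5000

    That, together with the PermGen figures in the GC log, should tell you how much headroom the 256 MB setting really has.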

    brad : 256 is what it was configured at before I came in so I'm not sure why it was chosen. We actually stop tomcat and restart on each deploy and it only runs one app so I don't think we'd have that problem of it keeping old apps in memory. In the logs I've never seen it go above 50mb so I've set it to 80mb just to give a bit of a buffer.
    From rubenvdg

Java max heap size: how much is too much?

I'm having issues with a JRuby (Rails) app running in Tomcat. Occasionally page requests can take up to a minute to return (even though the Rails logs show the request was processed in seconds, so it's obviously a Tomcat issue).

I'm wondering what settings are optimal for the java heap size. I know there's no definitive answer, but I thought maybe someone could comment on my setup.

I'm on a small EC2 instance which has 1.7 GB of RAM. I have the following JAVA_OPTS:

-Xmx1536m -Xms256m -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled

My first thought is that Xmx is too high. If I only have 1.7 GB and I allocate 1.5 GB to Java, I feel like I'll get a lot of paging. Typically my Java process shows (in top) 1.1 GB resident memory and 2 GB virtual.

I also read somewhere that setting Xms and Xmx to the same size will help, as it eliminates time spent growing the heap.

I'm not a java person but I've been tasked with figuring out this problem and I'm trying to find out where to start. Any tips are greatly appreciated!!

update
I've started analyzing the garbage collection dumps using -XX:+PrintGCDetails

When I notice these occasional long load times, the GC logs go nuts. During the last one (which took 25s to complete) there were GC log lines such as:

1720.267: [GC 1720.267: [DefNew: 27712K->16K(31104K), 0.0068020 secs] 281792K->254096K(444112K), 0.0069440 secs]
1720.294: [GC 1720.294: [DefNew: 27728K->0K(31104K), 0.0343340 secs] 281808K->254080K(444112K), 0.0344910 secs]

About 300 of them on a single request!!! Now, I don't totally understand why it's always GC'ing from ~28 MB down to ~0 over and over.

  • Part of your problem is that you are probably starving all other processes of RAM. My general rules of thumb for -Xms and -Xmx are as follows:

    -Xms : <System_Memory>*.5
    -Xmx : <System_Memory>*.75

    So on a 4 GB system it would be -Xms2048m -Xmx3072m, and in your case I would go with -Xms896m -Xmx1344m.
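
    Purely as an illustration (not part of the original answer), plugging those numbers into the JAVA_OPTS from the question while keeping the other flags unchanged would look like:

    -Xms896m -Xmx1344m -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled

    Keep in mind that the 1344 MB heap plus the 256 MB PermGen still has to fit into 1.7 GB alongside the JVM itself, so you may need to trim further.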

    From Zypher
  • While I haven't run any JRuby apps on Tomcat, I have run ColdFusion apps on varied J2EE app servers, and I also have had similar issues.

    In these FAQs, you'll see that Sun says that on 32-bit Windows, you'll be limited to a max heap size of 1.4 to 1.6 GB. I never was able to get it stable that high, and I suspect you're running a similar configuration.

    My guess is that your requests are taking a long time to run b/c with a heap size that high, the JVM has allocated more physical memory than Windows had to give, and thus Windows spends a lot of time swapping pages in and out of memory to disk so it can provide the required amount of memory to the JVM.

    My recommendation, although counter-intuitive, would be that you actually lower the max heap size to somewhere around 1.2 GB. You can raise the min size as well, if you notice that there are slow-downs in the app's request processing while the JVM has to ask Windows for more memory to increase the size of its heap as it fills with uncollected objects.

    brad : I'm pretty sure you're right. Although we're running Linux (not Windows), I'm pretty sure that the machine is swapping too much and having a hard time garbage collecting.
  • Hi,

    In addition to the previous answers, you should also take PermGen into account. PermGen is not part of the heap space. With your current configuration your Java process could sum up to 1792 MB, which is the total amount of RAM on your machine.

    brad : Yeah, I just read about that also; we're definitely starving the system with the combination of the PermGen and max heap
    Clint Miller : Oh yeah- that's a good point too. Do you have a reference for PermGen not being part of the heapspace?
    Christian : this post on stackoverflow explains the memory model and also has a link to a sun blog with an explanation of the PermGen: http://stackoverflow.com/questions/2129044/java-heap-terminology-young-old-and-permanent-generations you also see it in the gc log (when enabled): `0.431: [Full GC [PSYoungGen: 352K->0K(101952K)] [PSOldGen: 0K->330K(932096K)] 352K->330K(1034048K) [PSPermGen: 3959K->3959K(16384K)], 0.0187660 secs]` you can see that the PermGen is handled separately.
    From Christian
  • I know there's already an answer chosen, but still, here goes my explanation.

    First of all, in the commandline you use, you already reserve 1536 megabyte for the Java heap (-Xmx1536m) and 256 megabyte for the PermGen (-XX:MaxPermSize=256m). The PermGen is allocated separately from the Java heap and is used for storing the Java classes loaded in the JVM.

    These 2 areas together already add up to 1792 megabyte of RAM.

    But in addition to that, there is also RAM needed to load the JVM itself (the native code of the JVM) and RAM to store the code that is generated by the JIT compiler.

    I suspect all those add up to the 2 gigabyte virtual that you mentioned.

    Finally, you also have to take into account the other things that are running on the server and that need RAM too. You didn't really mention how much swap is in use on the server. This would tell you whether the machine is swapping and that is causing the application to react slowly. You should at all times prevent the JVM from hitting the swap. It's much much better to frequently trigger the garbage collector, than to allocate too much heap and have part of the Java heap being swapped out.

    From rubenvdg

What email server should I choose?

I need a secure email server installed on Debian Lenny, with users stored in a MySQL table.
Also, users are from multiple domains.
Quotas should be stored in MySQL or set as a global value for all users.
What are my options?
Thanks in advance for your help.

  • I pretty much followed this guide: http://www.howtoforge.com/virtual-users-domains-postfix-courier-mysql-squirrelmail-debian-lenny

    Does what you require.

  • I run the following software on Debian Etch without any problems:

    • Postfix
    • Postfixadmin
    • MySQL
    • Dovecot (Courier etc. will probably also work, but I've found Dovecot to be much slimmer on resources and easier to set up)

    Gives you virtual users and domains, vacation messages, IMAP(S), POP3(S), web-based user/alias/domain management etc. Highly recommended.
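
    As a very rough sketch of the MySQL-backed part (the file names, table name and query below are placeholders, not taken from the answer or the tutorial), the Postfix side typically boils down to pointing the virtual_* parameters at mysql: lookup tables in main.cf:

    virtual_mailbox_domains = mysql:/etc/postfix/mysql_virtual_domains.cf
    virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailboxes.cf
    virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_aliases.cf

    where each .cf file describes one lookup, for example:

    # mysql_virtual_mailboxes.cf - placeholder credentials and schema
    hosts = 127.0.0.1
    user = mailadmin
    password = secret
    dbname = mailserver
    query = SELECT maildir FROM mailbox WHERE username = '%s'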

    Edit: Here's a tutorial as well - http://bliki.rimuhosting.com/space/knowledgebase/linux/mail/postfixadmin+on+debian+sarge

    From pauska
  • Although I like installing and configuring the different components of a complete mail system myself, you might be interested in iRedMail which is basically a script installing and configuring every needed component for you. It also brings a web interface for configuration.

    From joschi
  • postfix-policyd for your rate limiting/throttling

    http://packages.debian.org/lenny/postfix-policyd http://policyd.sourceforge.net/

  • I'm a huge qmail fan. It has never had a security issue. The toaster setups make it easier to throw a box up if you're less familiar with it.

    Scott Lundberg : Ditto for me. There are some great tutorials that can be found at qmail.org in the documentation section. Also, Life With Qmail, by Dave Sill http://lifewithqmail.org/ is a great resource.
    pauska : I like qmail as well, but the patching of code (which violates qmail's security guarantee) got to me, so I changed to Postfix.
    From Warner
  • Courier has the complete suite (MTA, POP3, IMAP) that all can share the same authentication-system (single configuration for validating against a database). I'm not sure about quota though.

    If you mix systems (Postfix + Courier, for example), you will need to set them up individually against the same database schema.

    From jishi
  • Use Postfix, with virtual-domains.

    From eternal1

Feasibility of Windows Server 2008 DFS replication over WAN link

We have just set up a WAN link that connects two buildings in our organisation. The link is provided by a 100-Mbps point to point line. We have a Windows Server 2008 R2 domain controller on each side of the link.

Now we are planning to set up DFS for file services across the organisation. The estimated data volume is over 2 TB, and will grow at approximately 20% annually. My idea is to set up a file server in each building and install DFS so that all the contents stay replicated over the 100-Mbps link. I hope that this will ensure that any user will be directed to the closest (and fastest) server when requesting a file from the DFS folders.

My concern is whether a 100-Mbps WAN link is good enough to guarantee DFS replication. I've no experience with DFS, so any solid advice is welcome. The line is reliable (i.e. it doesn't crash often) and our data transfer tests show that a 5 MB/sec transfer rate is easily achieved. This is approximately 40% of the nominal bandwidth.

I am also concerned about the latency. I mean, how long will users need to wait to see one change on one side of the link after the change has been made on the other side.

My questions are: Is this link between networks a reliable infrastructure on which to set up DFS replication? What latency times would be typical (seconds, minutes, hours, days)? Would you recommend that we go for DFS in this scenario, or is there a better alternative? Many thanks.

  • It should work well on your link. We do it across much slower links and it works. We configure the replication so the highest bandwidth is used during off hours. I assumed you meant the new DFSR rather than the older version.

    CesarGon : Thanks Dave. What's the latency? I mean, how much do you typically need to wait to see an update on one side after a change has been made on the other side?
    TomTom : Depends on the change. The change is analysed and compressed, then queued for transfer. Normally (small files, free bandwidth) not more than 30 seconds or so. Dump a DVD in and that one will take some minutes, because compressing it first takes time, as does decompressing on the receiving end.
    CesarGon : Understood. Thanks for clarifying.
    From Dave M
  • DFS... like the old or the new replication mechanism?

    The old one - not feasible at all, not even over a LAN link... it was unreliable for larger scenarios.

    The new one (DFS Replication) - yes, sure. Works perfectly. It is very reliable and it will queue as it needs. As long as your link has enough bandwidth overall, things will eventually work. I am keeping up a number of links over 512 kbit and sometimes queue 20 GB for transfer... Takes some days, but it works.
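
    If you want to keep an eye on how far behind a member is, one option (a sketch - the replication group, folder and server names here are placeholders) is to query the DFSR backlog from the command line on one of the 2008 R2 boxes:

    dfsrdiag backlog /rgname:"FileShares" /rfname:"Projects" /smem:MAINSRV /rmem:REMOTESRV

    A small or empty backlog means changes are flowing through within the sort of delay discussed above.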

    CesarGon : Well, we are using Windows Server 2008 R2, as I say in the original post, so I am talking about the "new mechanism" I guess. :-)
    CesarGon : When you say that it takes some days, do you mean that a change made on one side of the link would take *days* before it is replicated to the other side? That would make DFS replication unsuitable for us...
    TomTom : What I said is that transferring MY 20 GB change over MY 512 kbit link takes some days. Now think about why ;) Naturally, if you change more data than the pipe can handle in a reasonable time - with compression - no replication software will magically make it appear. If you need to transfer 2000 GB daily, 100 Mbit may simply not be enough. It depends on how many changes you have. I merely showed reliability over small pipes with tons of data.
    CesarGon : I understand. Many thanks for the clarification.
    TomTom : Just as an end note - check whether it works for you. If it does NOT - it is not the technology, it is either the requirements OR... the link is simply not enough. DFS replication is very well made. Actually... update your sysvol replication to it ;)
    tony roth : not sure if this has been mentioned but you can use windows backup to prestage a local server then move the entire server or just drives to the remote site.
    From TomTom
  • Is there a chance that the same file will be edited by two different users simultaneously on the two replicas? DFS doesn't provide a distributed locking mechanism to protect from this.

    CesarGon : Yes, that scenario would be possible unless we explicitly prevent it. Thanks for pointing this out. How is this limitation usually addressed?
    Fred : Sadly, there's no silver bullet. Depending on geographic locations of your users, it might be that their work hours don't overlap and thus edits won't collide. Can you carve up the data set so that files are located close to the users who need them the most? That way you have a single copy of the data and normal Windows locking helps with the collision issue. If you really want simultaneous bi-directional replication, DFS-R may not suffice for your use-case.
    CesarGon : All users are within the same time zone, unfortunately. I could split the data set into two halves and put each half closest to the users who need it the most, and replicate it to the other side in a read-only fashion. Is that a good solution?
    From Fred
  • Hi, I'd be very interested in how you found this to perform with large data sizes. I'm looking for a similar solution (with approx 10TB & 100MB L2 links) to replicate data from the East Coast to Europe. I've looked at several solutions but if it works as it should, the 'free' option (DFSR) should be good enough...I think. Or does anyone know a better product (ideally one that in some way does solve the problem of two simultaneous edits)? Thanks!

    CesarGon : Hi Matt. We will be implementing DFSR across the 100 MB link shortly. I will let you know how it goes.
    From Matt

Setting Default Printers in Login Script?

I've got a login script setup, that removes all old printers, and then adds the current set of network printers.

CODE

'Create the WScript.Network object used below (not shown in the original snippet)
Set WSHNetwork = CreateObject("WScript.Network")

'Remove any existing network printer connections
Set WSHPrinters = WSHNetwork.EnumPrinterConnections
For LOOP_COUNTER = 0 To WSHPrinters.Count - 1 Step 2
  If Left(WSHPrinters.Item(LOOP_COUNTER +1),2) = "\\" Then
    WSHNetwork.RemovePrinterConnection WSHPrinters.Item(LOOP_COUNTER +1),True,True
  End If
Next

'Install Network Printers
WSHNetwork.AddWindowsPrinterConnection "\\SERVER\PRINTER1"
WSHNetwork.AddWindowsPrinterConnection "\\SERVER\PRINTER2"
WSHNetwork.AddWindowsPrinterConnection "\\SERVER\PRINTER3"
WSHNetwork.AddWindowsPrinterConnection "\\SERVER\PRINTER4"

This is fine, but it seems to reset the current default printer on the user's machine.

Is there a way to preserve the current default printer on the user's machine?

Is this the most sensible way to assign network printers to users upon login? Or are there alternative or better ways to do so?

Any help is very much appreciated.

  • Why use a script at all?

    I roll out printer configurations (and mapped drives) using the client-side extensions of the Group Policy preferences mechanism (introduced quite a while ago and delivered as part of Windows updates for years now).

    Roy : Can you be more specific? Can you please link to a guide? Thanks
    TomTom : http://blog.mpecsinc.ca/2008/12/sbs-2008-group-policy-client-side.html
    Ian Bowes : Are the Client Side Extensions of GP only available through Windows Server 2008? We're running Windows Server 2003 SP2 at the moment. I'd be very interested in taking the setup of Printers and Drive mappings out of a login script and into some sort of formal Group Policy, but I haven't any clue how to do it, or where to start.
    TomTom : They should be available in 2003 / XP. Separate downloads, though (as in: if your systems are fully patched, they should be there).
    From TomTom
  • Last time I scripted that, I added a group for each printer in AD - then added the user to whatever printer group was supposed to be his or her default - and in the login script checked for this group membership, setting the appropriate default.

    Obviously, that environment was pretty fixed so this was easy to determine - putting the burden of setting a default printer on the group templates instead of on the poor user (who could still temporarily change it manually when needed). A more obvious approach might be to check what the default printer is before removing the printers, and then (if that printer still exists after your script) re-apply the default printer setting.
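
    A rough sketch of that approach (assuming the same WSHNetwork object as in the question's script; the registry value read here is where Windows keeps the current default printer):

    'Remember the current default printer before removing connections
    Set WSHShell = CreateObject("WScript.Shell")
    strDefault = ""
    On Error Resume Next
    strDefault = WSHShell.RegRead("HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows\Device")
    On Error GoTo 0
    'The value looks like "\\SERVER\PRINTER1,winspool,Ne01:" - keep only the printer name
    If InStr(strDefault, ",") > 0 Then strDefault = Left(strDefault, InStr(strDefault, ",") - 1)

    '... remove and re-add the network printers as in the question ...

    'Re-apply the default if that printer was added back
    If strDefault <> "" Then
        On Error Resume Next
        WSHNetwork.SetDefaultPrinter strDefault
        On Error GoTo 0
    End If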

    But as TomTom writes, these days printers can be connected using group policies - and then you shouldn't experience any of your mentioned problems anyway.

    I also recall doing a registry dump of the Printers registry key and then just importing it, which was stupendously fast. If you've got the possibility to freeze system configurations (like on a TS) it's quite fun, though not very maintainable ;)

    Ian Bowes : I've since modified the login script itself to do exactly what you suggested. However, we have a number of users who have local printers that will be set as default, and presumably the login script will rejig these around. I've read that removing network printers and adding them again is a good practice because it prevents some problems with multiple SPOOLSS, but if there was a way to circumvent this seemingly arcane practice in login scripts, and manage it with GP I'd be very interested to hear about how to go about it, or be pointed in the right direction.
  • I use an old exe that came in the WinNT resource kit called con2prt.exe.

    The best way to call it would be from your VBS login script as follows:

    'Mapping printers needed by everyone
    Set WSHShell = CreateObject("Wscript.Shell")
    WSHShell.Run ("\\SERVER\SYSVOL\SERVER.local\scripts\map_printers.bat")
    

    And the Map_Printers.bat should contain

    :: Map Printers
    :: HP 1600
    \\SERVER\SYSVOL\server.local\scripts\con2prt.exe /cd \\SERVER\HP1600
    :: Ricoh Aficio 2035e 
    \\SERVER\SYSVOL\server.local\scripts\con2prt.exe /c \\SERVER\RICOH2035
    :: Samsung ML-2010
    \\SERVER\SYSVOL\server.local\scripts\con2prt.exe /c \\SERVER\SamsML2010
    :: HP BusinessInkjet 2230
    \\SERVER\SYSVOL\server.local\scripts\con2prt.exe /c \\SERVER\HP2230
    

    The /cd means set default.

    You can find out all the commands by running con2prt.exe /?

    Also - you can download here : http://www.paulmcgrath.net/download.php?view.2

Dell Poweredge 2650 RAM Upgrade

Hi,

I bought an old Dell Poweredge 2650 off ebay to use as a dev server, but I need to get more RAM for it. From what I understand there were 2 versions of the 2650 released, with the older version only supporting PC1600 RAM (200MHz) and the 2nd version supporting PC2400. Unfortunately Dell thought it wasn't necessary to label their servers as such so I can't tell what version I have.

Does anyone know if there is a way to tell without buying and testing RAM in the server?

  • If it runs, you could check with SIW on Windows, or lshw on Linux

    Otaku Coder : Thanks, I'll remember that for future reference, but unfortunately I don't have any DDR RAM to boot the server up and test it.
    Warner : dmidecode would probably be more appropriate in Linux.
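
    For reference, a sketch of what those lookups would be on the Linux side (assuming the box can be brought up at all, e.g. from a live CD; run as root):

    sudo lshw -class memory
    sudo dmidecode --type memory

    Both list the installed DIMMs along with their type and rated speed.
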
  • If you enter the support section of the Dell website, and enter the server's service tag, you can then click on the link 'System Configuration' listed under the 'product support' section.

    When I do this with my Dell servers, it tells me the speed of the memory supplied with the system.

    I've tried to give you a hyperlink for 'System Configuration', but it seems to contain lots of session data; if I attempt to remove it, I get redirected to an error page.

    Edit: Try this link, it might work.

    Dave M : +1 Dell support is usually great for this. Service Tag gives a wealth of info.
    From Bryan
  • Thanks Bryan, but this is what the wonderful Dell Service Tag lookup yields:

    ... 1GB ECC DDR MEMORY, (2X512MB) ...

    I guess I just get some DDR ECC RAM and 'plug and pray' to see if it works!

  • Not an expert on RAM, but Kingston.com just shows one option for RAM. Personally, I'd get a couple of PC2400 sticks and try it - if there aren't any pin differences between the two, it's probably just a speed difference, and the faster memory should work at the slower speeds.

    From chris
  • Here are the memory specs for the PE2650:

    Memory

    • Architecture: 72-bit ECC PC-1600 DDR SDRAM DIMMs, with 2-way interleaving
    • Memory module sockets: six 72-bit wide 184-pin DIMM sockets
    • Memory module capacities: 128 MB, 256 MB, 512 MB, or 1 GB registered SDRAM DIMMs, rated for 200-MHz DDR operation
    • Minimum RAM: 256 MB
    • Maximum RAM: 6 GB

    And here's the user manual section on installing/adding memory:

    http://support.dell.com/support/edocs/systems/pe2650/en/it/5g375c60.htm#1070776

    From joeqwerty

Subdomains and SSL > How can I provide SSL on *.domain.com and also at the root?

I know that you can provide SSL at any subdomain with a wildcard SSL cert, but how can you do that and also have SSL at the root (i.e., when somebody just types https://example.com/ without the www)? Would I just install the wildcard cert, and a second cert for handling root :443 requests? I can't use mod_rewrite because the browser won't get that far before alerting the user about the missing or invalid certificate.

  • You need 2 certificates for this to work I'm afraid.

    : AFAIK, you can only use 1 certificate per ip-address.
    grawity : vorik: "TLS Server Name Indication"
    From Joachim
  • One certificate with all the domains described in the 'X509v3 Subject Alternative Name' attribute may do the job. Most modern web browsers support this AFAIK, though I am not sure if the well-known commercial CAs do issue such certificates.
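
    If you want to check whether a certificate you already have covers both names, one way (a sketch; cert.pem is a placeholder file name) is:

    openssl x509 -in cert.pem -noout -text | grep -A1 "Subject Alternative Name"

    which prints something like "DNS:example.com, DNS:*.example.com" when both the bare domain and the wildcard are listed.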

    grawity : http://wiki.cacert.org/VhostTaskForce
  • I'm using mod_rewrite for this purpose just fine; I redirect requests from https://domain.com/application/ to https://www.domain.com/application/ using the following rules:

    RewriteEngine On
    
    # Use correct hostname
    RewriteCond %{HTTP_HOST} ^domain\.com$
    RewriteRule ^(.*)$ https://www.domain.com/$1 [R=301,L]
    

    So you'd just need one wildcard SSL certificate.

    orokusaki : @Matthias Vance How does that work? If the user reaches `https://mydomain.com` first, their browser will try to connect via SSL and fail before a redirect can be issued, right?
    Matthias Vance : I tested this on one of our systems, but that happens to have a root and "www." certificate. So, the test is flawed, but I think it's still worth trying out, because the SSL certificate will get sent no matter what hostname you use (over SSL). So the browser should get the redirect just fine.
  • Many CAs (including Comodo, and DigiCert) will include the base domain name as a free SAN in their wildcard certificates: http://www.sslshopper.com/ssl-certificate-comparison.html?ids=26,13,45

    So you could use the one wildcard certificate to secure domain.com and anything.domain.com. That way you don't get any errors, but you still might want to redirect them to www.

    From Robert

What is the fastest and safest RAID combination for SATA drives?

I wonder what is the fastest and safest RAID combination for SATA drives and general use (some write, mostly read)?

RAID 0 is fast but utterly unsafe, RAID 1 is safe but slow, RAID 5 is safe but not so fast, especially on the cheap controllers (XOR calculations).

It seems that RAID 1+0 (RAID 10) is the best combination: you get mirroring for safety and striping for speed. Are there any other better or more optimal combinations? The only drawback of RAID 10 is inefficient storage utilization.

  • There is pretty much nothing better than RAID 10 for speed, because you get write decoupling. Any more space-efficient RAID level (5, 6) has a write bottleneck that RAID 10 does not.

    That said, you MAY get away with replacing a RAID 10 of normal discs with a RAID 5 or RAID 6 based on SSDs - which may not be that much more expensive, thanks to needing fewer discs.

    RAID 5 gets unsafe with too-large / too many discs - in that case you need to go RAID 6. The problem is that if a disc fails in RAID 5... at a certain point you are quite likely to get a second disc failure DURING THE REBUILD, at which point the array fails. The limit is currently seen around 2gb discs, so it's more relevant for archive setups. RAID 6 solves that for now.

    Personally, I currently go RAID 5/6 for storage and file servers, RAID 10 for virtual server operating system discs (but then I have like 6-10 platters and run 40 or so servers off that - if they all boot, that is pretty much disc hell), and RAID 10 for some database data areas.

    Another thing to look at is the discs you use: higher IOPS are better. Cheap would be normal SATA discs; high end are 15,000 RPM SAS discs that cost a fortune. The Western Digital VelociRaptor 2.5" enterprise version is a good middle ground - 300 GB per disc, 10,000 RPM, about double the IO of a standard SATA disc, but a LOT cheaper than high-end SAS discs. But then, a RAID 5 of SSDs will soon beat those in performance AND price... because you need fewer of them.

    As andol said, it all depends on your needs. What are you trying to do?

    And finally - this does not depend on SATA at all. Actually, thanks to SAS interoperability with SATA, you can plug any SATA drive into a SAS backplane (they are compatible - even physically) and use the SAS infrastructure.

    ptman : "The limit is currently seen around 2gb discs" - do you mean 2TB?
    TomTom : sorry, yes- 2tb.
    Bart Silverstrim : I second the RAID 5 issue with unrecoverable read errors. It SUCKS to discover that two hours into a rebuild and having to start with a fresh recovery from backup! ARGH! And while RAID 6/10 solves that, I've read some grumblings that as data needs continue to increase, those will soon have issues too. Data density just keep jumping higher and with it comes more issues with data integrity and reliability.
    TomTom : Yes. This is what many people overlook - the moment a rebuild starts, the other discs are under stress. Perfect time for them to start showing "issues" ;)
    From TomTom
  • It also depends upon the number of drives: with 4 drives, go for RAID-10. With more than 8 drives, RAID-6 will probably be fast enough with a good RAID controller (3Ware, Areca, Intel 52xxx series). Here are the numbers:

    • 4 x 1TB, RAID 10 : 2TB available space, 180 MB/s write, 190 MB/s read
    • 8 x 1TB, RAID 10 : 4TB available space, 360 MB/s write, 400 MB/s read
    • 8 x 1TB, RAID-5 (dangerous): 7 TB available, 420 MB/s write, 440 MB/s read (3Ware)
    • 8 x 1TB, RAID-6 : 6 TB available, 240 MB/s write, 360 MB/s read (3Ware)
    • 16 x 1TB, RAID-6 : 14TB available, 280 MB/s write, 700 MB/s read (3Ware)

    As you can see, with about 8 drives RAID 5 and RAID 6 are quite competitive in sequential performance with RAID-10 (not so with a shitty card such as Promise, etc). Write performance is quite limited in RAID-6, though tolerable given enough drives.

    With big drives, RAID-5 is relatively unsafe because of the long time (3 to 4 hours, up to 7 to 8 hours) necessary for rebuilding. You may go to RAID-5 with 6 or 8 drives though, but you must stop all write operations in case of a drive failure until the array is rebuilt. This way it's "safe enough".

    Also, don't use desktop drives in a RAID array with more than 4 drives. Vibrations and read errors will kill performance.

    Bart Silverstrim : RAID 5 is also unsafe because of unrecoverable read errors...
    wazoox : I've set up several hundred servers using RAID-5 and RAID-6, and unrecoverable RAID errors are extremely rare, sufficiently so not to be a serious trouble.
    TomTom : Sucky drives. 4x300gb, RAID 10, 600gb available - 500mb/s COPY (read+write same time, same drive set). Adaptec 5805 and... VelociRaptors (enterprise edition, not the ones mounted in 3.5" coolers that don't do anything). Never looked back ;) I second the warning for desktop drives - the bearings are different on enterprise drives, made for more vibrations (which are common when you pack 10 or 20 or more drives in a cage).
    From wazoox

RAID for a SCM server

I need to buy/build a server to host our Subversion repository (FYI: I am a dev/not an IT guy). Obviously this is mission critical, and needs to have high network and disk i/o performance. Our repository is currently 5GB and we support 20 devs. The server was going to be Windows 2008, but Linux is an option if it is a compelling and simpler/easier solution.

CLARIFICATION: The 5GB repository is about 2GB source, and yes, it needs to handle 20 devs doing multiple small commits, logs, histories, and checkouts all day long. (How do I clarify source commits? A few C# files here and there, with a few lines of changes? Pretty standard stuff.)

UPDATE: Budget: I was hoping to get by with $2,000 or less, only because I don't think we need to spend that much. However, if it takes $5,000, then that is what it takes. This is our LIFE. But if $2500 gets 100% and $5000 gets 103%, it isn't worth the extra money.

My first priority, of course, is data integrity. If a drive fails, I want to have the machine stop writes and be able to put a new drive in quickly to have the machine back up and running as fast as possible. (I can deal with a few hours of downtime, but not a few hours of "work" during the downtime).

I don't think I need (or want) RAID 5, as the rebuild cost seems too high/complicated.

At a minimum, I could use RAID 1, and have a backup disk (clearly one not from the same batch or even maker ;-)

RAID 1+0 looks like it might be faster? Is it worth the complexity?

Can someone point me to some suggestions and best practices for managing a RAID array? In particular, whatever solution is offered, how do I manage a disk failure? Is there software that can notify me (email/pager) if a drive dies? Software that will prevent writes to the disk at that point?

Any other things I need to think of?

UPDATE: My question is this: What are the advantages of hardware RAID vs. Windows Server 2008 software RAID for RAID 1+0 with regard to speed, management (of a dead disk), and alerts on disk failure?

Thanks

  • Your repository is 5GB, but what is the frequency of your commits / updates and the rough size of those?

    How much money are we working with here? This really is the most important first question you should ask yourself.

    RAID 1 or 1+0 with either 1 or 2 hot spares would be ideal, I am thinking; this way, if a drive does fail, the RAID card will automatically begin rebuilding the array using the hot spare drive. You would then just buy a new drive to match the ones you have in there, and replace the bad one with that.

    : Updated with budget and commit information.
    : Hardware or software? Why one or the other? What about notification and controlling of what happens during a drive failure?
    From Zero0ne
  • If a drive fails, I want to have the machine stop writes and be able to put a new drive in

    RAID controllers typically don't operate like this. If a drive fails, the controller marks the array as degraded, and continues to let the array operate (but at a lower speed as it needs to do more error handling).

    I don't think I need (or want) RAID 5, as the rebuild cost seems to high/complicated.

    Generally RAID 5 and 6 are perfectly valid choices; the rebuild cost is rarely incurred. A bigger issue is that the write performance of RAID 5/6 can be rather low.

    I could use RAID 1

    For 20 users, with decent disks I guess this would be fine.

    RAID 1+0 looks like it might be faster? Is it worth the complexity?

    Yes, RAID 1+0 is faster, and does not have any significant additional complexity -- this is one of the most frequently used RAID levels and all good controllers have a mature implementation of this. In a perfect world, a 4-disk RAID 1+0 could have 4x the read performance and 2x the write performance of a single drive. One thing though, costs goes up as you need at least 4 drives, and effective storage size relative to the number of drives used is not too great.

    how do I manage the disk failure. Is there software that can notify me (email/pager)

    Comes with the controller if you buy a decent one; you just have to install the management software and set it up for email notifications. Additionally you can put a hot-spare drive on the controller, so that it will rebuild right away (note that performance goes down during rebuild).

    3 tips:

    • Measure your current disk I/O pattern and performance needs on your existing server (perfmon etc). Don't go overboard on RAID if your actual disk I/O isn't that high. 20 users is not much, but of course Subversion may need more disk I/O than one would think.
    • Buy a name-brand server (Dell, HP, IBM, etc), don't DIY. It is almost never worth it for a generic standard server.
    • Remember, RAID != backup. You seem a little fixated on the disk failure scenario -- RAID provides you with a higher uptime for the server and more disk I/O, but you still need proper backups.
    3dinfluence : +1 for mentioning the backup thing... was about to post another answer just calling out the fact that this question seems to be using RAID in place of backups.
    : 1. We are currently using an off site repository, so we don't have anything to measure with (until we get something in house) 2. I am sold on COTS hardware. It is fully baked 3. Yes, this is not backup, just an ability to recover quickly from a disaster.
    : I am still confused about software vs hardware RAID and the software to manage it. Is there a disadvantage to using a SATA card and Windows Server 2008 software to manage it? Or should I use a hardware card and its software? What are the pros and cons of each?
    Chris S : Software raids tend to require bringing the server down to replace a disk. Hardware raids from Dell or HP don't. I'd recommend something like a HP ML310 with SC40Ge, a pair of SAS HDs in RAID 1, and Server 2008 R2 Std Ed. It'll cost about $2500 and last you for at least 3 years.
    Jesper Mortensen : @teleball: "Software RAID" (by which I mean something the OS handles) is cheap and pretty good. Downsides are as Chris S writes: a) RAID'ing the OS boot partition is very hard / impossible, b) recovering after disk failure requires a reboot, c) the management apps are typically more thought of as 'local' to the PC, e.g. drive failure notifications are logged to syslog but the management apps can't send them as emails. If you have 20 devs working on this, then their salary totally dominates the HW cost -- get a small name-brand server with a 'real' hardware RAID controller.
  • I would recommend a hardware RAID 1+0 setup. This will give you good performance and redundancy/failure tolerance at the expense of costing a bit more (more drives are required vs. RAID 5).

    A mirrored RAID volume has 2 copies of all of the data, so if a drive fails you still have an accessible copy. You don't need to block disk access on a drive failure. You can configure your system with "hot spare" drives that sit unused until a drive failure occurs, then spring to life and automagically take the place of the failed drive. This should give you a fully-functional RAID volume that can tolerate another drive failure and buy you enough time to replace the failed drive. In order for the RAID 1+0 volume to completely fail, you would need to have multiple drive failures within a short amount of time (which is typically quite uncommon).

    Most hardware RAID controllers come with management software that can alert you on failures.

    Most of my server experience is using HP products, so I'm mainly speaking from that point of view (although most other brands do something similar).

    Chris S : Most drives from several years ago would be fast enough for this application. There's no need to go to RAID 10 to eek out the last bit of performance.
    From
  • I cannot see from your description where the high network or disk IO load could occur. 5 GB is a very small repository for SCM; C# files are just a few KB in size, and 20 devs are no problem at all. So you should concentrate on the reliability of your setup: a server with redundant power supplies and RAID 1 should be fine. Your main concern should be disaster recovery, but this is nothing a RAID setup will buy you, as you are probably aware.

    : I want fast checkouts of 2GB of data (over the network) as well fast history/log queries.
    Chris S : My 4 year old server can pump out 2GB in <10 seconds. A newer server could do better. Worry about reliability, your throughput requirements are fairly conservative.
    From

Inexpensive degaussers or HDD shredders?

Apologies, I'll simplify my question: Are there any degaussers or HDD shredders out there in the $500-1500 range designed for use as such that you would recommend for low-volume use?

  • We use an inexpensive drill press. One or two passes and the platter is done.

    Webjedi : re your edit...I'm saying put the drill bit through the platter...not the motor...of course that doesn't help.
    Nicholas Knight : I understand what you meant perfectly, but that destroys data on only a small portion of the platter, recovery from large intact pieces is still possible, hence the need for a proper shredder (or degausser).
    : @Nicholas Knight: Or, it means you need to send it through the drill a couple of additional passes. Even if you're not drilling holes through the platter, use the spinning bit to gouge grooves all across the platter (scratching it up so badly that it can't be read).
    chris : If your "adversaries" can get the data off of a disk with 3 holes in it, you should be more afraid of them taking you aside and asking you for a more current copy of the data.
    The Journeyman geek : Actually, professionals tend to use the hole drilling method. I'd also suggest a dban wipe before drilling - just in case.
    From Webjedi
  • Thermite (be careful)

    Nicholas Knight : Are you going to pay for the insurance?
    MikeJ : I have used thermite to destroy about 50 HD's that we had piling up. One caution is the slag takes much longer than you think to cool.
    : If you are concerned about safety, you can use a cinder block to contain the thermite reaction. Find a cinder block that has a hole large enough to place the disk platter into, and set the cinder block (hole facing up) on concrete (parking lot works well). Place the platter into the cinder block and ignite the thermite inside the block. The block should contain the reaction, but back up 10-15 feet to be safe. As long as you place the thermite so it isn't in direct contact with the block, the concrete will withstand the heat and possible flying bits of scrap.
    From James
  • If the drives aren't capable of being wiped with software, mechanical destruction of the platter is about your only option (Good news! It's usually the fun way as well!).

    If you don't have a lot of devices to get rid of, you can try using thermite which is typically fairly inexpensive to obtain (depending on where you live) and is completely irreversible (some military aircraft use thermite to destroy stored records in flight computers if the aircraft crashes in a foreign country). Take the top cover off of the first drive to make sure you're placing the thermite so that it melts through as much of the platter as possible.

    A less fun but equally destructive method is to remove the individual platters and run the surface of them randomly across a bench grinder. You can probably borrow or rent one for a couple of hours and not have to spend much money compared to buying a degausser.

    From
  • I could have sworn this has been asked before, but I can't find a close duplicate...

    With only 10 disks to get rid of, you should do what I do: take the covers off and take the voice coil magnets out. They're rare earth magnets and some of them are quite large, they make great fridge magnets, and are useful for lots of other things.

    If you want to be very, very sure no-one could get the data off, the first thing to do is to wipe the voice coil magnet over the surface of the platter you can see, which will mess up the servo tracks, making it essentially impossible to recover the disk. (I have one magnet from an old DEC 1GB drive that is 1cm thick and can wipe a hard drive without taking the cover off.)

    I usually stop at getting the voice coil magnets out, but for an HR machine I once took the platters out and broke them and disposed of half of the pieces at work, half at home.

    Nicholas Knight : +1 for actually coming closest to solving the problem as intended.
    Chris S : +1 for using the "extra parts" for fridge magnets.
    Ward : Among other things, I use them to put reminder notes in places I can't miss, the magnets can hold a couple sheets of paper to a wall anywhere there's a drywall screw. I've used them to hang heavy objects from a window by putting one magnet on each side of the glass...
    chris : @ward -- be careful you're not using the "magnet on each side of the glass" on modern energy efficient glass that is actually made of 2 or 3 *really* thin sheets of glass and an air gap. It could make for a sad day when the two magnets come together *through* the air gap...
    From Ward
  • My father was a mechanic, and he used to say "the only two tools a good mechanic needs are a hammer, and a bigger hammer." I'd pull the platters and smash them with a big hammer. Probably a good stress reliever. One place I worked would occasionally get PCMCIA hard drives back from cardiologists with patient data on them; we found that banging the hard drive flat on a bench would shatter the platters into tiny pieces - you could hear the glass shatter, then rattle around inside. Full-size hard drives are probably going to be a little tougher, but you should be able to damage them enough.

    On the LTO tapes, I'd probably open them up and take a knife to the tape while its on the reel, cutting the tape into a series of 3-4 inch sections.

    Farseeker : Bahaha, I would +1 for the hammer saying but I get the feeling that the OP doesn't appreciate these sorts of answer (yours was similar to mine, which I deleted)
    Chris S : +1 we use a 7# Sledge. It's excellent stress relief and as you stated, the platters are tiny pieces when you're done. For LTO Tapes nothing beats a fire pit. They stink a bit, but there's essentially nothing left.
    From BillN
  • 3 pieces of wood (at least 1" thick, 8" - 12" long) laid flat and lined up parallel to each other; attach the pieces to a 1/2" or 3/4" plywood base as follows. Allow about 2.75" between pieces 1 and 2 (for a 3.5" drive) and about 1.75" between 2 and 3 (for a 2.5" drive). The drives will sit on top of the wood; you want a hollow space under them.

    One 2lb hammer, 1 railroad spike or other very large nail. Get a length of pipe (any kind) with a diameter just large enough to fit the spike, cut pipe length about 1/2"- 3/4" shorter than the length of the spike.

    Place drive(s) on the platform (longer pieces of wood allow multiple drives at once), strike with the hammer, and repeat as needed; the pieces will be as small as needed or as energy permits.

    Cheap, fast (no disassembly of the drive required), pretty safe (goggles?), as thorough as you need it to be, no electricity required, and the operation is very satisfying.

    Chris S : Yikes that's a lot of instructions for: beat with hammer.
    From Ed Fries
  • Use a belt sander on the platters. About $50-$150.