Saturday, January 29, 2011

How to configure Windows Server 2008 DHCP to supply a unique subnet to a remote site?

The Main site hosts the only Windows server: a Windows Server 2008 R2 domain controller running AD, DNS, DHCP, and Exchange 2007. The Remote site has no Windows server.

The Main site subnet is 192.168.1.0/24; the Remote site subnet is 192.168.2.0/24.

The Windows server at the Main site is supplying 192.168.1.0/24 via DHCP to hosts at the site where it resides. Is it possible to configure that Windows server to also supply 192.168.2.0/24 to hosts at the Remote site, and if so, how?

We could use the Cisco router at the Remote site to supply DHCP, but if possible we'd like to use the Windows server at the Main site to supply DHCP.

  • No, not possible as things stand: the Remote site does not forward DHCP requests to the Main site. This is because those are broadcasts, which are NOT transmitted outside the Ethernet segment - i.e. they do not cross the router.

    Yes, it is possible. You need to set up a DHCP relay on the Remote side (it can be part of the router) to forward DHCP requests to the Windows server. Then you set up a normal scope for the remote subnet on the DHCP server.
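
    On the Cisco side, the relay is usually just an `ip helper-address` statement. A minimal sketch, assuming the router's LAN interface faces 192.168.2.0/24 and the Windows DHCP server sits at 192.168.1.10 (an address made up for illustration):

    ! Remote-site router: relay broadcast DHCP requests to the main-site server
    interface FastEthernet0/0
     ip address 192.168.2.1 255.255.255.0
     ip helper-address 192.168.1.10

    The relayed request carries the router interface's address (the giaddr field), which is how the Windows DHCP server knows to answer from the 192.168.2.0/24 scope.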

    That said, the idea may be terrible. The problem is: whenever the link is down and a computer comes online during that time, it gets no IP address, and pretty much the user needs to restart (unless you want to talk users through "ipconfig /renew" on the command line). DHCP (unlike IPv6 in general) has no concept of assigning addresses to computers after network activation. Technically you would be better off getting a small server and putting it at the remote site. This can be a small Atom-based box, serving as:

    • Local DHCP server
    • Local domain controller (same problem - link down, things get bad)
    • Local DNS server
    • Possibly a local file store, at least for a special admin share so you have fast access to your tools

    If you don't trust the remote site, with 2008 R2 you can make that controller an RODC (Read-Only Domain Controller). It will still stabilize operations.

    I would consider it bad practice to supply DHCP from your central site.

    caleban : I think the reason this whole idea came up is that it seemed cheaper. It would be cheaper to use the single server at the main site than to set up a second server at the remote site, i.e. purchase another license for Windows Server 2008 R2 and the client access licenses. 2008 R2 and the CALs for the remote site would be several thousand dollars.
    Stemen : But... how would that be cheaper than continuing to use DHCP on the Cisco router? Is something wrong with doing it that way? Personally, I'm in the middle of deploying a bunch of DHCP servers for a corporate VoIP system. Each server is running DHCPD on CentOS, on a PowerEdge box. Our priority wasn't cost but reliability: with failover enabled, we'll be able to serve DHCP either from each machine in the field or from a single server in our main datacenter.
    TomTom : Not cheaper - up to the moment you have a day or two of downtime and people cannot work because you were too cheap. It also would be another backup of the domain (how many domain controllers do you run?). You run a single server? Have you thought about the catastrophe cost of having to COMPLETELY REINSTALL ACTIVE DIRECTORY because you don't have a single backup unit? OUCH. I mean REALLY OUCH.
    TomTom : Costs for CALs - huh? Don't get me wrong, but either the remote systems work against your server (so they already have a CAL), or they do not (then they don't need a CAL to access DHCP). ANY single-server solution sounds like "I want a disaster" to me. Sometimes you can be TOO cheap.
    From TomTom

Server 2003 on domain won't let domain user have local profile

I have a few servers exhibiting this behavior: you log in and always get put into a temporary profile. The server is licensed for TS. The user I am testing with has local admin rights, so it doesn't seem to be a permission issue on the server.

I first get a message that the user's roaming profile cannot be found, even though we don't use roaming profiles. I then get another message immediately after saying a local profile could not be loaded, so it will only use a temp profile.

Any help would be greatly appreciated.

    1. Make sure the user's profile has been deleted and nothing exists at C:\Documents and Settings\%USERNAME% or C:\Documents and Settings\%USERNAME%.%DOMAIN%
    2. Open up regedit
    3. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList
    4. Remove the key for the user with the problem (see the sketch after this list). The key will be named after the user's security identifier (SID), like S-1-5-21-3141592-6535897932-3846644798-1649.
      • You can look at the value for HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-3141592-6535897932-3846644798-1649\ProfileImagePath to help you figure out which profile is which.
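
    If you prefer doing step 4 from a command prompt, here is a minimal sketch using the built-in reg.exe (the SID is the illustrative one from step 4; the export gives you a backup to restore if something goes wrong):

    :: List every profile key with its ProfileImagePath to find the right SID
    reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /v ProfileImagePath
    :: Back up the key, then remove it
    reg export "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-3141592-6535897932-3846644798-1649" profile-backup.reg
    reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-3141592-6535897932-3846644798-1649" /f
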
    RobW : I did this, and now when I log on to the server I get the normal logon screen, fill in username and password, and then the RDC connection ends as soon as I click the button, with and without the console switch applied.
    RobW : Just tried another user as well; with this user I get the same exact results as before: it tries to access a roaming profile even though there isn't one, then can't find or create a local profile either, and logs on with a temp profile.
    Zoredache : Hrm, that is unusual. Are you sure the user doesn't have roaming profile settings applied to their account? Can you check the event log after a login? There should be some errors being logged. I do know the above helped fix a situation I was having that I believe was similar.
    RobW : I get the following errors in the event log (user and server name scrambled):
    RobW : Event Type: Error Event Source: Userenv Event Category: None Event ID: 1521 Date: 3/10/2010 Time: 1:14:01 PM User: zxzbz\zcbzcbzb Computer: zccb-zcbz-zcbzc Description: Windows cannot locate the server copy of your roaming profile and is attempting to log you on with your local profile. Changes to the profile will not be copied to the server when you logoff. Possible causes of this error include network problems or insufficient security rights. DETAIL - The network name cannot be found.
    RobW : Event Type: Error Event Source: Userenv Event Category: None Event ID: 1511 Date: 3/10/2010 Time: 1:14:04 PM User: czbcz\zcbzcb Computer: zcbczbzcbzcbzb Description: Windows cannot find the local profile and is logging you on with a temporary profile. Changes you make to this profile will be lost when you log off.
    RobW : No roaming profile is designated for the user in AD
    From Zoredache
  • Check your TS GPO for a roaming profile setting. The setting is at:

    Computer Configuration>Administrative Templates>Windows Components>Terminal Services>Set path for TS Roaming Profiles

    From joeqwerty
  • Go into the Group Policy Editor for the machine (gpedit.msc): Computer Config > Admin Templates > System > User Profiles > Only Allow Local Profile - Enable.

    This has been an outstanding issue and headache of mine for months... yet it turned out to be a stupidly simple fix. Hopefully it helps someone else out!

    From RobW

How does Java permgen relate to code size?

I've been reading a lot about Java memory management, garbage collection et al., and I'm trying to find the best settings for my limited memory (1.7 GB on a small EC2 instance). I'm wondering if there is a direct correlation between my code size and the permgen setting. According to Sun:

The permanent generation is special because it holds data needed by the virtual machine to describe objects that do not have an equivalence at the Java language level. For example objects describing classes and methods are stored in the permanent generation.

To me this means it's literally storing my class definitions etc. Does this mean there is a direct correlation between my compiled code size and the permgen I should be setting? My whole app is about 40 MB, and I noticed we're using a 256 MB permgen. I'm thinking maybe we're using memory that could be better allocated to dynamic data like object instances.

  • Sun says that the permgen is storage for objects that don't have an equivalence in the Java language:

    A third generation closely related to the tenured generation is the permanent generation. The permanent generation is special because it holds data needed by the virtual machine to describe objects that do not have an equivalence at the Java language level. For example objects describing classes and methods are stored in the permanent generation.

    So yeah, the permanent generation contains "internal" JVM data structures that describe objects and methods, among other things. 256 MB might be rather large, but it would depend on the application.

    In a large ColdFusion app like the one I run, a huge number of classes are created for ColdFusion code that is compiled on the fly, and I am running with the permanent generation set to 192 MB. I think I used to run it successfully at 128 MB as well, so it's definitely something you could experiment with.

    If you're measuring performance under load before and after you tune these GC parameters, you'll be able to tell what impact your changes have on your application's performance.

    Also, the document I linked gives a lot of information about how the JVM manages its memory; it's quite good reading.
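
    If you want to see how big the permanent generation actually gets before shrinking it, here is a minimal sketch (the flags and tool are for the Sun HotSpot JVMs of this era; adjust the pid and paths to your setup):

    # Cap the permgen lower and log GC details, including permgen occupancy:
    JAVA_OPTS="-XX:MaxPermSize=128m -verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log"

    # Or sample the live permgen capacity/usage of a running JVM every 5 seconds:
    jstat -gcpermcapacity <pid> 5s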

    brad : Yeah, that's where I got my quote too. I just wasn't sure if there's extra stuff going into that permgen aside from my own code. I literally have 45 MB of compiled code, and the GC logs show permgen usually hovering around 50 MB.
    Clint Miller : Shrink it, then. Why not? Try 128 MB and let it run for a while.
  • Is your Tomcat configured to automatically redeploy applications that are dropped in the application folder? Do you use that functionality?

    If you use that functionality, there's a possibility that the amount of memory used in the PermGen space will increase. This can happen due to (badly written) applications that in some way keep references to the loaded classes alive. Each time there's a redeploy, the amount of memory used in the PermGen space increases until it overflows.

    If you don't use that functionality and always have the same application running on the Tomcat server, then I would just try running Tomcat with the default settings for PermGen space. If the application loads and runs fine for a while, then it should be fine. If the application runs out of PermGen space, then just increase it in steps until the PermGen space is big enough.

    Why has it been configured to 256m (as seen in your other question) in the first place?

    Btw, yes, there is a correlation between the number of loaded classes and the amount of space needed in the PermGen area. So yes, the more code that is loaded, the more PermGen space you're going to need.

    brad : 256 is what it was configured at before I came in, so I'm not sure why it was chosen. We actually stop Tomcat and restart on each deploy, and it only runs one app, so I don't think we'd have that problem of keeping old apps in memory. In the logs I've never seen it go above 50 MB, so I've set it to 80 MB just to give a bit of a buffer.
    From rubenvdg

Java max heap size: how much is too much?

I'm having issues with a JRuby (Rails) app running in Tomcat. Occasionally page requests can take up to a minute to return (even though the Rails logs show the request was processed in seconds, so it's obviously a Tomcat issue).

I'm wondering what settings are optimal for the java heap size. I know there's no definitive answer, but I thought maybe someone could comment on my setup.

I'm on a small EC2 instance which has 1.7 GB RAM. I have the following JAVA_OPTS:

-Xmx1536m -Xms256m -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled

My first thought is that Xmx is too high. If I only have 1.7 GB and I allocate 1.5 GB to Java, I feel like I'll get a lot of paging. Typically my Java process shows (in top) 1.1 GB resident memory and 2 GB virtual.

I also read somewhere that setting Xms and Xmx to the same size will help, as it eliminates time spent on memory allocation.

I'm not a java person but I've been tasked with figuring out this problem and I'm trying to find out where to start. Any tips are greatly appreciated!!

update
I've started analyzing the garbage collection logs using -XX:+PrintGCDetails.

When I notice these occasional long load times, the GC logs go nuts. On the last one (which took 25 s to complete) I had GC log lines such as:

1720.267: [GC 1720.267: [DefNew: 27712K->16K(31104K), 0.0068020 secs] 281792K->254096K(444112K), 0.0069440 secs]
1720.294: [GC 1720.294: [DefNew: 27728K->0K(31104K), 0.0343340 secs] 281808K->254080K(444112K), 0.0344910 secs]

about 300 of them on a single request!!! Now, I don't totally understand why it's always GC'ing from ~28 MB down to 0 over and over.

  • Part of your problem is that you are probably starving all other processes of RAM. My general rule of thumb for -Xms and -Xmx is as follows:

    -Xms : <System_Memory> * 0.5
    -Xmx : <System_Memory> * 0.75

    So on a 4 GB system that would be -Xms2048m -Xmx3072m, and in your case I would go with -Xms896m -Xmx1344m.
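
    Combined with the flags from the question, the full line might look like this (a sketch, not a tuned recommendation; PermGen is lowered here so heap plus PermGen still leave the OS some headroom within 1.7 GB):

    JAVA_OPTS="-Xms896m -Xmx1344m -XX:MaxPermSize=128m -XX:+CMSClassUnloadingEnabled"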

    From Zypher
  • While I haven't run any JRuby apps on Tomcat, I have run ColdFusion apps on varied J2EE app servers, and I also have had similar issues.

    In these FAQs, you'll see that Sun says that on 32-bit Windows you'll be limited to a max heap size of 1.4 to 1.6 GB. I was never able to get it stable that high, and I suspect you're running a similar configuration.

    My guess is that your requests are taking a long time to run because, with a heap size that high, the JVM has allocated more physical memory than Windows had to give, and thus Windows spends a lot of time swapping pages in and out of memory to disk so it can provide the required amount of memory to the JVM.

    My recommendation, although counter-intuitive, would be that you actually lower the max heap size to somewhere around 1.2 GB. You can raise the min size as well, if you notice that there are slow-downs in the app's request processing while the JVM has to ask Windows for more memory to increase the size of its heap as it fills with uncollected objects.

    brad : I'm pretty sure you're right. Although we're running Linux (not Windows), I'm pretty sure the machine is swapping too much and having a hard time garbage collecting.
  • In addition to the previous answers, you should also take PermGen into account. PermGen is not part of the heap space. With your current configuration, your Java process could sum up to 1792 MB, which is the total amount of RAM on your machine.

    brad : Yeah, I just read about that too. We're definitely starving the system with the combination of the PermGen and the max heap.
    Clint Miller : Oh yeah- that's a good point too. Do you have a reference for PermGen not being part of the heapspace?
    Christian : This post on Stack Overflow explains the memory model and also has a link to a Sun blog with an explanation of the PermGen: http://stackoverflow.com/questions/2129044/java-heap-terminology-young-old-and-permanent-generations You can also see it in the GC log (when enabled): `0.431: [Full GC [PSYoungGen: 352K->0K(101952K)] [PSOldGen: 0K->330K(932096K)] 352K->330K(1034048K) [PSPermGen: 3959K->3959K(16384K)], 0.0187660 secs]` - you can see that the PermGen is handled separately.
    From Christian
  • I know there's already an answer chosen, but still, here goes my explanation.

    First of all, on the command line you use, you already reserve 1536 MB for the Java heap (-Xmx1536m) and 256 MB for the PermGen (-XX:MaxPermSize=256m). The PermGen is allocated separately from the Java heap and is used for storing the Java classes loaded in the JVM.

    Together, these two areas already add up to 1792 MB of RAM.

    But in addition to that, there is also RAM needed to load the JVM itself (the native code of the JVM) and RAM to store the code that is generated by the JIT compiler.

    I suspect all those add up to the 2 gigabyte virtual that you mentioned.

    Finally, you also have to take into account the other things running on the server that need RAM too. You didn't mention how much swap is in use on the server; that would tell you whether the machine is swapping, which would cause the application to react slowly. You should at all times prevent the JVM from hitting swap: it's much, much better to trigger the garbage collector frequently than to allocate too much heap and have part of the Java heap swapped out.
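
    Checking that from a shell is quick (a sketch using standard Linux tools, nothing app-specific):

    free -m      # how much swap is in use right now
    vmstat 5     # non-zero si/so columns mean pages are actively moving to/from swap
    top          # compare RES vs VIRT for the java process, as in the question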

    From rubenvdg

What email server should I choose?

I need a secure email server installed on Debian Lenny with users in a MySQL table.
Also, the users are from multiple domains.
Quota should be in MySQL or a global variable for all users.
What are my options?
Thanks in advance for your help.

  • I pretty much followed this guide: http://www.howtoforge.com/virtual-users-domains-postfix-courier-mysql-squirrelmail-debian-lenny

    Does what you require.

  • I run the following software on Debian Etch without any problems:

    • Postfix
    • Postfixadmin
    • MySQL
    • Dovecot (Courier etc. will probably also work, but I've found Dovecot to be much slimmer on resources and easier to set up)

    Gives you virtual users and domains, vacation messages, IMAP(S), POP3(S), web-based user/alias/domain management etc. Highly recommended.

    Edit: Here's a tutorial as well - http://bliki.rimuhosting.com/space/knowledgebase/linux/mail/postfixadmin+on+debian+sarge
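
    To give a flavour of the MySQL-backed lookups, here is a minimal sketch of a dovecot-sql.conf; the table and column names (virtual_users, email, password, maildir) and the uid/gid are assumptions to adapt to your own schema:

    driver = mysql
    connect = host=127.0.0.1 dbname=mailserver user=mailuser password=secret
    default_pass_scheme = MD5-CRYPT
    # %u is the login name; both queries assume a virtual_users table
    password_query = SELECT email AS user, password FROM virtual_users WHERE email = '%u'
    user_query = SELECT maildir AS home, 5000 AS uid, 5000 AS gid FROM virtual_users WHERE email = '%u'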

    From pauska
  • Although I like installing and configuring the different components of a complete mail system myself, you might be interested in iRedMail, which is basically a script that installs and configures every needed component for you. It also provides a web interface for configuration.

    From joschi
  • postfix-policyd for your rate limiting/throttling

    http://packages.debian.org/lenny/postfix-policyd http://policyd.sourceforge.net/
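
    Wiring it into Postfix is then a single restriction in main.cf (a sketch; 10031 is the port policyd v1 conventionally listens on, so check your policyd configuration):

    smtpd_recipient_restrictions =
        permit_mynetworks,
        reject_unauth_destination,
        check_policy_service inet:127.0.0.1:10031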

  • I'm a huge qmail fan. It has never had a security issue. The toaster makes it easier to throw a box up if you're less familiar with it.

    Scott Lundberg : Ditto for me. There are some great tutorials at qmail.org in the documentation section. Also, Life with qmail by Dave Sill (http://lifewithqmail.org/) is a great resource.
    pauska : I like qmail as well, but the patching of code (which violates qmail's security guarantee) got to me, so I changed to Postfix.
    From Warner
  • Courier has a complete suite (MTA, POP3, IMAP) that can all share the same authentication system (a single configuration for validating against a database). I'm not sure about quota, though.

    If you mix systems (Postfix + Courier, for example), you will need to set them up individually for the same database schema.

    From jishi
  • Use Postfix, with virtual domains.

    From eternal1

Feasibility of Windows Server 2008 DFS replication over WAN link

We have just set up a WAN link that connects two buildings in our organisation. The link is provided by a 100-Mbps point-to-point line. We have a Windows Server 2008 R2 domain controller on each side of the link.

Now we are planning to set up DFS for file services across the organisation. The estimated data volume is over 2 TB, and will grow at approximately 20% annually. My idea is to set up a file server in each building and install DFS so that all the contents stay replicated over the 100-Mbps link. I hope that this will ensure that any user will be directed to the closest (and fastest) server when requesting a file from the DFS folders.

My concern is whether a 100-Mbps WAN link is good enough to sustain DFS replication. I've no experience with DFS, so any solid advice is welcome. The line is reliable (i.e. it doesn't drop often), and our data transfer tests show that a 5 MB/s transfer rate is easily achieved; that is approximately 40% of the nominal bandwidth.

I am also concerned about latency; I mean, how long will users need to wait to see a change on one side of the link after it has been made on the other side?

My questions are: Is this link between networks a reliable infrastructure on which to set up DFS replication? What latency times would be typical (seconds, minutes, hours, days)? Would you recommend that we go for DFS in this scenario, or is there a better alternative? Many thanks.

  • It should work well on your link. We do it across much slower links and it works. We configure the replication so the highest bandwidth is used during off hours. I'm assuming you mean the new DFSR rather than the older version.

    CesarGon : Thanks Dave. What's the latency? I mean, how long do you typically need to wait to see an update on one side after a change has been made on the other side?
    TomTom : Depends on the change. The change is analysed and compressed, then queued for transfer. Normally (small files, free bandwidth) not more than 30 seconds or so. Dump a DVD in and that one will take some minutes, because compressing it first takes time, as does decompressing on the receiving end.
    CesarGon : Understood. Thanks for clarifying.
    From Dave M
  • DFS... like the old or the new replication mechanism?

    The old one - not feasible at all, not even over a LAN link... it was unreliable for larger scenarios.

    The new one (DFS Replication) - yes, sure. It works perfectly. It is very reliable and it will queue as it needs. As long as your link has enough bandwidth overall, things will eventually work. I am keeping up a number of links over 512 kbit and sometimes queue 20 GB for transfer... Takes some days, but it works.
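
    If you want to watch that queue, the dfsrdiag tool that ships with DFSR can report the backlog between two members. A sketch with made-up names (replication group "Office Files", replicated folder "Shared", servers MAINSRV and REMOTESRV):

    dfsrdiag backlog /rgname:"Office Files" /rfname:"Shared" /sendingmember:MAINSRV /receivingmember:REMOTESRV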

    CesarGon : Well, we are using Windows Server 2008 R2, as I say in the original post, so I am talking about the "new mechanism" I guess. :-)
    CesarGon : When you say that it takes some days, do you mean that a change made on one side of the link would take *days* before it is replicated to the other side? That would make DFS replication unsuitable for us...
    TomTom : What I said is that transferring MY 20 GB change over MY 512-kbit link takes some days. Now think about why ;) Naturally, if you change more data than the pipe can handle in a reasonable time - with compression - no replication software will magically make it appear. If you need to transfer 2000 GB daily, 100 Mbit may simply not be enough. It depends on how many changes you have. I merely showed reliability over small pipes with tons of data.
    CesarGon : I understand. Many thanks for the clarification.
    TomTom : Just as an end note - check whether it works for you. If it does NOT, it is not the technology; it is either the requirements OR... the link is simply not enough. DFS replication is very well made. Actually... update your SYSVOL replication to it ;)
    tony roth : Not sure if this has been mentioned, but you can use Windows Backup to prestage a local server, then move the entire server or just the drives to the remote site.
    From TomTom
  • Is there a chance that the same file will be edited by two different users simultaneously on the two replicas? DFS doesn't provide a distributed locking mechanism to protect from this.

    CesarGon : Yes, that scenario would be possible unless we explicitly prevent it. Thanks for pointing this out. How is this limitation usually addressed?
    Fred : Sadly, there's no silver bullet. Depending on geographic locations of your users, it might be that their work hours don't overlap and thus edits won't collide. Can you carve up the data set so that files are located close to the users who need them the most? That way you have a single copy of the data and normal Windows locking helps with the collision issue. If you really want simultaneous bi-directional replication, DFS-R may not suffice for your use-case.
    CesarGon : All users are within the same time zone, unfortunately. I could split the data set into two halves and put each half closest to the users who need it the most, and replicate it to the other side in a read-only fashion. Is that a good solution?
    From Fred
  • Hi, I'd be very interested in how you found this to perform with large data sizes. I'm looking for a similar solution (with approx. 10 TB & 100-Mb L2 links) to replicate data from the East Coast to Europe. I've looked at several solutions, but if it works as it should, the 'free' option (DFSR) should be good enough... I think. Or does anyone know a better product (ideally one that in some way solves the problem of two simultaneous edits)? Thanks!

    CesarGon : Hi Matt. We will be implementing DFSR across the 100 MB link shortly. I will let you know how it goes.
    From Matt

Setting Default Printers in Login Script?

I've got a login script set up that removes all old printers and then adds the current set of network printers.

CODE

' Network object used to enumerate, remove and add printer connections
Set WSHNetwork = CreateObject("WScript.Network")

' EnumPrinterConnections returns port/name pairs, so step through in twos
' and remove only network printers (UNC names starting with \\)
Set WSHPrinters = WSHNetwork.EnumPrinterConnections
For LOOP_COUNTER = 0 To WSHPrinters.Count - 1 Step 2
  If Left(WSHPrinters.Item(LOOP_COUNTER + 1), 2) = "\\" Then
    WSHNetwork.RemovePrinterConnection WSHPrinters.Item(LOOP_COUNTER + 1), True, True
  End If
Next

'Install network printers
WSHNetwork.AddWindowsPrinterConnection "\\SERVER\PRINTER1"
WSHNetwork.AddWindowsPrinterConnection "\\SERVER\PRINTER2"
WSHNetwork.AddWindowsPrinterConnection "\\SERVER\PRINTER3"
WSHNetwork.AddWindowsPrinterConnection "\\SERVER\PRINTER4"

This works fine, but it seems to reset the current default printer on the user's machine.

Is there a way to preserve the current default printer on the user's machine?

Is this the most sensible way to configure network printers for users at login? Or are there alternative or better ways to do so?

Any help is very much appreciated.

  • Why use a script at all?

    I roll out printer configurations (and mapped drives) using the client-side extensions of the Group Policy mechanism (introduced quite some time ago and included in Windows updates for years now).

    Roy : Can you be more specific? Can you please link to a guide? Thanks
    TomTom : http://blog.mpecsinc.ca/2008/12/sbs-2008-group-policy-client-side.html
    Ian Bowes : Are the Client Side Extensions of GP only available through Windows Server 2008? We're running Windows Server 2003 SP2 at the moment. I'd be very interested in taking the setup of Printers and Drive mappings out of a login script and into some sort of formal Group Policy, but I haven't any clue how to do it, or where to start.
    TomTom : They should be available in 2003 / XP. Separate downloads, though (as in: if your systems are fully patched, they should be there).
    From TomTom
  • Last time I scripted that, I added a group for each printer in AD - then added the user to whichever printer group was supposed to be his or her default - and in the login script checked for this group membership, setting the appropriate default.

    Obviously, that environment was pretty fixed, so this was easy to determine - putting the burden of setting a default printer on the templates instead of the poor user (who could still temporarily change it manually when needed). A more obvious approach might be to check what the default printer is before removing the printers, and then (if that printer still exists after your script) re-apply the default printer setting, as sketched below.
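
    A minimal sketch of that remember-and-re-apply idea in VBScript (the registry value read here is where Windows keeps the default printer; the surrounding names are illustrative):

    Set WSHShell = CreateObject("WScript.Shell")
    Set WSHNetwork = CreateObject("WScript.Network")

    ' The default printer is stored as "name,winspool,port" under this value
    On Error Resume Next
    sDefault = WSHShell.RegRead("HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows\Device")
    On Error GoTo 0
    If InStr(sDefault, ",") > 0 Then sDefault = Left(sDefault, InStr(sDefault, ",") - 1)

    ' ... remove and re-add printer connections as in the question ...

    If sDefault <> "" Then
        On Error Resume Next            ' ignore the error if that printer no longer exists
        WSHNetwork.SetDefaultPrinter sDefault
        On Error GoTo 0
    End If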

    But as TomTom writes, these days printers can be connected using group policies - and then you shouldn't experience any of your mentioned problems anyway.

    I also recall that doing a registry dump of the Printers registry key and then just importing it was stupendously fast. If you've got the possibility to freeze system configurations (like on a TS), it's quite fun, though not very maintainable ;)

    Ian Bowes : I've since modified the login script itself to do exactly what you suggested. However, we have a number of users who have local printers that will be set as default, and presumably the login script will rejig these around. I've read that removing network printers and adding them again is good practice because it prevents some problems with multiple SPOOLSS instances, but if there were a way to circumvent this seemingly arcane practice in login scripts and manage it with GP, I'd be very interested to hear how to go about it, or be pointed in the right direction.
  • I use an old exe that came in the WinNT resource kit called con2prt.exe.

    The best way to call it would be from your VBS login script as follows:

    'Mapping printers needed by everyone
    Set WSHShell = CreateObject("Wscript.Shell")
    WSHShell.Run ("\\SERVER\SYSVOL\SERVER.local\scripts\map_printers.bat")
    

    And Map_Printers.bat should contain:

    :: Map printers
    :: HP 1600
    \\SERVER\SYSVOL\server.local\scripts\con2prt.exe /cd \\SERVER\HP1600
    :: Ricoh Aficio 2035e
    \\SERVER\SYSVOL\server.local\scripts\con2prt.exe /c \\SERVER\RICOH2035
    :: Samsung ML-2010
    \\SERVER\SYSVOL\server.local\scripts\con2prt.exe /c \\SERVER\SamsML2010
    :: HP BusinessInkjet 2230
    \\SERVER\SYSVOL\server.local\scripts\con2prt.exe /c \\SERVER\HP2230
    

    The /cd switch means connect and set as default (plain /c just connects).

    You can find out all the commands by running con2prt.exe /?

    Also - you can download it here: http://www.paulmcgrath.net/download.php?view.2