Thursday, March 31, 2011

Developing a custom on-screen display application for a laptop.

I would love to create a custom on-screen display app for my laptop, seeing as the manufacturer-supplied one is utter garbage. I'm out to write an app that will show things such as volume control, screen brightness, wireless antenna status and so forth. I'm not sure what info I'd need to know to begin creating something like this (interfaces for grabbing key/button presses, system priority, any other relevant data) and wondered if anyone could help point me in the right direction. If possible I'd like to develop it in C#.

For example, when I hold in the "Fn" key on my laptop keyboard and press the "F5" key, the volume is decreased. There is a visual representation of this action by way of an on-screen graphic that shows the current sound level being decreased. I want to replace the native graphic because, well, I just don't like it :D.
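
For what it's worth, the key-capture side of this can be done in C# with the Win32 RegisterHotKey API. Below is a minimal sketch (not from the original post; the form and IDs are illustrative). Many laptops surface volume Fn combinations as media keys such as VK_VOLUME_DOWN, though some handle them purely in firmware. Note that RegisterHotKey claims the key, so the app must change the volume and draw its own graphic:

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public class OsdForm : Form
{
    [DllImport("user32.dll")]
    static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);

    [DllImport("user32.dll")]
    static extern bool UnregisterHotKey(IntPtr hWnd, int id);

    const int WM_HOTKEY = 0x0312;      // posted when a registered hotkey fires
    const uint VK_VOLUME_DOWN = 0xAE;  // media key many laptops send for Fn+volume-down
    const int HOTKEY_ID = 1;

    public OsdForm()
    {
        // 0 = no modifier; the media key itself is the whole hotkey
        RegisterHotKey(Handle, HOTKEY_ID, 0, VK_VOLUME_DOWN);
    }

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_HOTKEY && (int)m.WParam == HOTKEY_ID)
        {
            // lower the volume here, then show the custom OSD graphic
        }
        base.WndProc(ref m);
    }

    protected override void OnFormClosed(FormClosedEventArgs e)
    {
        UnregisterHotKey(Handle, HOTKEY_ID);
        base.OnFormClosed(e);
    }
}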

From stackoverflow
  • I use AutoHotKey for this purpose. The programming language/framework is kind of oddball but it's very powerful and there are lots of freely available scripts out there that do amazing things (well, amazing when you consider the language ;-). Adam Pash (from LifeHacker.com) has done a lot of good work with AutoHotKey.

    I haven't done anything with custom graphics but here's what my volume control code looks like (it's tied to Win-Plus and Win-Minus hotkeys).

    #=::
        SoundSetWaveVolume, +5
        Gosub, osd
        Return
    
    #-::
        SoundSetWaveVolume, -5
        Gosub, osd
        Return
    
    ;; onscreen display for volume control
    osd:
        IfWinNotExist, volume
        {
            Progress, b1, volume:, , volume
        }
        WinSet, Transparent, 220, volume
        SoundGet, Volumelvl, wave, volume
        Transform, Volumelvl,  Round, %Volumelvl%
        Progress, %Volumelvl%, volume: %Volumelvl%`%
        SetTimer, osd_off, 1000
        Return
    
    osd_off:
        SetTimer, osd_off, Off
        Progress, off
        Return
    
    Neitherman : For now I don't see any reason not to use this app, thanks!

ASP.Net Object Data Source - Data Binding

At what point in the page life cycle does an ASP.NET Object Data Source bind data from the specified data source?

From stackoverflow
  • From the Page Life Cycle Overview, referring to the databinding event:

    This event is raised by data-bound controls before the PreRender event of the containing control (or of the Page object) and marks the beginning of binding the control to the data.

    And regarding the DataBound event:

    This event marks the end of data-binding operations in a data-bound control. In a GridView control, data binding is complete for all rows and any child controls. Use this event to format data bound content or to initiate data binding in other controls that depend on values from the current control's content.

  • As Joel stated, binding happens in PreRender. If you're really interested, you can take a look at BaseDataBoundControl.OnPreRender and you'll see the code that is responsible for this.
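
    To see the ordering for yourself, you can hook both events; a quick sketch (the GridView name is hypothetical, and Page_PreRender is auto-wired when AutoEventWireup is on):

    protected void Page_PreRender(object sender, EventArgs e)
    {
        // Data-bound controls bind around this point, just before each
        // control's own PreRender.
    }

    // wired via OnDataBound="GridView1_DataBound" in the control markup
    protected void GridView1_DataBound(object sender, EventArgs e)
    {
        // Binding is complete here; safe to format the bound content.
    }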

How can I capture rich text as input to a web form?

I need to have the user be able to cut-and-paste rich text into a big text box on my web page. How do I do that?

From stackoverflow
  • You can install a JavaScript editor like TinyMCE to allow for rich text. This particular WYSIWYG editor has a "Paste from Word" function that helps reduce all the nasty character encoding problems you typically have when users attempt to publish Word documents on the web.

    ohnoes : Please note that you can't paste RTF into TinyMCE, at least on Mac (regardless of the browser)
  • I guess you can only do it using an IE/Windows/TinyMCE-like widget set.

    Here's some information about clipboards.
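
    For reference, a minimal TinyMCE setup along the lines of the first answer (3.x-era API; the script path and options are illustrative):

    <textarea id="content" name="content" rows="15" cols="80"></textarea>
    <script type="text/javascript" src="tiny_mce/tiny_mce.js"></script>
    <script type="text/javascript">
        // the "paste" plugin adds the Paste-from-Word cleanup mentioned above
        tinyMCE.init({
            mode: "exact",
            elements: "content",
            theme: "advanced",
            plugins: "paste"
        });
    </script>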

In NAnt, can I create a fileset of the files listed in a VS project?

I'm rewriting our NAnt build scripts to make them cleaner and simpler, as well as more general purpose.

One of the steps in our build process is to package certain files into a zip file. Previously, we had created a fileset with lots of included file types, trying to catch everything that might be in the project. This occasionally caused problems when we wanted to include a non-standard file type with the project.

I was hoping that I could find a way to create a fileset based on the files (and, also, a subset of the files) listed in the Visual Studio project file (.csproj). I could then use this fileset in a zip task. I haven't found anything that does this automatically (though the now-deprecated SLiNgshoT task in NAntContrib looked a little promising).

I started down the path of trying to do this manually, but have gotten stuck. I have a target that gets the project from the solution (using regex), then tries to get each Content file in the project using xmlpeek (with the XPath query /n:Project/n:ItemGroup/n:Content/@Include). The problem here is that xmlpeek doesn't return every value, just the first. And, even if it did return every value, I'm not sure how I'd get it into a fileset from here.

Is there any way to salvage this line of thinking? Can I accomplish what I want with a custom NAnt task (I'd prefer not to have every project dependent on a custom task)? Is there some built-in way to do this that I'm not finding?

Please feel free to ask questions in the comments if something about my goal or method isn't clear.

Thanks!


UPDATE: To clarify, my point in all of this is to make the whole process much more seamless. While I can easily add all .xml files to a package, this often gets .xml files that live in the same folder but aren't really part of the project. When I already have a list of the files that the project uses (separated out between Content and other types, even), it seems a shame not to use that information. Ultimately, no one should have to touch the build file to change what gets included in a package. It doesn't seem like too much of a pipe dream to me...

From stackoverflow
  • See this related question. The Microsoft.Build.BuildEngine API should let you get much better access to the information you need, but unfortunately I think you would have to build a custom task.

    But I think that parsing a project file to generate a fileset that is used elsewhere is ... a bit much to put all inside your build script. And since you seem to want to re-use this, I don't think that depending on a custom task is any worse than any other form of re-use (even if you can do it all in existing Nant tasks, you'd still need to have every project inherit that process somehow).

    bdukes : That code sample from the other question should be helpful to build out this custom task. I definitely would have gone down the regular expression and XML parsing path, otherwise.
  • As far as reading your project files goes, I've done something like this before. I ended up just writing a program that read the project files and built a custom project-by-project build file.

    However I was trying to compile my code base. Looks like you are trying to just zip it.

    Considering that the Zip task allows you to pass multiple filesets into it, you could create some generic filesets and then just update for your specific ones rather easily.

    So you could have something like this:

    <fileset id="project.source.objects">
      <include name="**/*.cs"/>
      <include name="**/*.xml"/>
      <include name="**/*.resx"/>
    </fileset>
    
    <fileset id="project.misc.sources">
      <include name="MyFile1.someext"/>
    </fileset>
    

    Then in your zip target just put:

    <zip zipfile="myzip.zip">
      <fileset refid="project.source.objects"/>
      <fileset refid="project.misc.sources"/>
    </zip>
    

    Now if you wanted project specifics, there is one other thing you could do. If you are using VS2005 or greater, search on T4. It's a template framework for creating plugins that let you have extra files generated when you do something, kind of like when you create a class diagram or have an XSD (and it gets its code-behind file; it's done this way).

    In most projects that I've worked on we maintain a build file for each project and just keep it up to date. Since we have the generic filesets that's a start so we only really need to update the file for the obscure stuff.

    Hope that helps. If you could provide a little more detail I might be able to help further.

    bdukes : That's a good idea to use named filesets to keep the unchanging, generic stuff in one place.
  • I like the first part of Josh's idea.

    Write a small console app to pull the files from the .csproj (since it's just XML in there) and write the list out to a file.
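
    A minimal sketch of such a console app, assuming the standard MSBuild 2003 XML namespace in the .csproj (the output filename matches the 7-zip call below; error handling omitted):

    // Sketch: dump the Content items of a .csproj to a plain file list
    // that 7-zip can consume via its @listfile syntax.
    using System;
    using System.IO;
    using System.Xml;

    class ProjectFileList
    {
        static void Main(string[] args)
        {
            var doc = new XmlDocument();
            doc.Load(args[0]); // path to the .csproj

            var ns = new XmlNamespaceManager(doc.NameTable);
            ns.AddNamespace("n", "http://schemas.microsoft.com/developer/msbuild/2003");

            using (var writer = new StreamWriter("mycustomfilelist.txt"))
            {
                foreach (XmlNode node in doc.SelectNodes(
                    "/n:Project/n:ItemGroup/n:Content/@Include", ns))
                {
                    writer.WriteLine(node.Value);
                }
            }
        }
    }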

    Instead of NAnt's zip task, I would use 7-zip to create the archive with the list you generated in the previous step:

    (I'm creating a self extracting archive here)

    <target name="ZipBuild">
      <exec program="${SevenZip}" workingdir="${SolutionPath}/Installation/Build.Staging">
        <arg value="a" />
        <arg value="-sfx" />
        <arg value="ReferenceBuild-${build.number}.exe" />
        <arg value="@mycustomfilelist.txt" />
      </exec>
    </target>
    

    Hope that helps,

    -jason

What strategy do you use for package naming in Java projects and why?

I thought about this a while ago and it recently resurfaced as my shop is doing its first real Java web app.

As an intro, I see two main package naming strategies. (To be clear, I'm not referring to the whole 'domain.company.project' part of this, I'm talking about the package convention beneath that.) Anyway, the package naming conventions that I see are as follows:

  1. Functional: Naming your packages according to their function architecturally rather than their identity according to the business domain. Another term for this might be naming according to 'layer'. So, you'd have a *.ui package and a *.domain package and a *.orm package. Your packages are horizontal slices rather than vertical.

    This is much more common than logical naming. In fact, I don't believe I've ever seen or heard of a project that does logical naming. This of course makes me leery (sort of like thinking that you've come up with a solution to an NP problem) as I'm not terribly smart and I assume everyone must have great reasons for doing it the way they do. On the other hand, I'm not opposed to people just missing the elephant in the room and I've never heard an actual argument for doing package naming this way. It just seems to be the de facto standard.

  2. Logical: Naming your packages according to their business domain identity and putting every class that has to do with that vertical slice of functionality into that package.

    I have never seen or heard of this, as I mentioned before, but it makes a ton of sense to me.

    1. I tend to approach systems vertically rather than horizontally. I want to go in and develop the Order Processing system, not the data access layer. Obviously, there's a good chance that I'll touch the data access layer in the development of that system, but the point is that I don't think of it that way. What this means, of course, is that when I receive a change order or want to implement some new feature, it'd be nice to not have to go fishing around in a bunch of packages in order to find all the related classes. Instead, I just look in the X package because what I'm doing has to do with X.

    2. From a development standpoint, I see it as a major win to have your packages document your business domain rather than your architecture. I feel like the domain is almost always the part of the system that's harder to grok, whereas the system's architecture, especially at this point, is almost becoming mundane in its implementation. The fact that I can come to a system with this type of naming convention and instantly from the naming of the packages know that it deals with orders, customers, enterprises, products, etc. seems pretty darn handy.

    3. It seems like this would allow you to take much better advantage of Java's access modifiers. This allows you to much more cleanly define interfaces into subsystems rather than into layers of the system. So if you have an orders subsystem that you want to be transparently persistent, you could in theory just never let anything else know that it's persistent by not having to create public interfaces to its persistence classes in the dao layer and instead packaging the dao class in with only the classes it deals with. Obviously, if you wanted to expose this functionality, you could provide an interface for it or make it public. It just seems like you lose a lot of this by having a vertical slice of your system's features split across multiple packages. (See the sketch after this list.)

    4. I suppose one disadvantage that I can see is that it does make ripping out layers a little bit more difficult. Instead of just deleting or renaming a package and then dropping a new one in place with an alternate technology, you have to go in and change all of the classes in all of the packages. However, I don't see this as a big deal. It may be from a lack of experience, but I have to imagine that the number of times you swap out technologies pales in comparison to the number of times you go in and edit vertical feature slices within your system.
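
For concreteness, here is a hypothetical sketch of point 3: with logical packaging, a feature's persistence class can stay package-private, so no other package can depend on it (all names below are illustrative).

// File: com/example/app/orders/OrderService.java
package com.example.app.orders;

public class OrderService {                 // the feature's only public entry point
    private final OrderDao dao = new OrderDao();

    public void placeOrder(String item) {
        dao.save(item);                     // persistence stays hidden behind this call
    }
}

// File: com/example/app/orders/OrderDao.java
package com.example.app.orders;

class OrderDao {                            // package-private: invisible outside .orders
    void save(String item) {
        // persistence details (DAO/ORM) hidden here
    }
}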

So I guess the question then would go out to you, how do you name your packages and why? Please understand that I don't necessarily think that I've stumbled onto the golden goose or something here. I'm pretty new to all this with mostly academic experience. However, I can't spot the holes in my reasoning so I'm hoping you all can so that I can move on.

Thanks in advance!

From stackoverflow
  • It depends on the granularity of your logical processes?

    If they're standalone, you often have a new project for them in source control, rather than a new package.

    The project I'm on at the moment is erring towards logical splitting: there's a package for the Jython aspect, a package for a rule engine, packages for foo, bar, binglewozzle, etc. I'm looking at having the XML-specific parsers/writers for each module within that package, rather than having an XML package (which I have done previously), although there will still be a core XML package where shared logic goes. One reason for this, however, is that it may be extensible (plugins), and thus each plugin will need to also define its XML (or database, etc.) code, so centralising this could introduce problems later on.

    In the end it comes down to what seems most sensible for the particular project. I think it's easy to package along the lines of the typical layered project diagram, however. You'll end up with a mix of logical and functional packaging.

    What's needed is tagged namespaces. An XML parser for some Jython functionality could be tagged both Jython and XML, rather than having to choose one or the other.

    Or maybe I'm wibbling.

    Tim Visher : I don't fully understand your point. I think what you're saying is that you should just do what makes the most sense for your project and for you that's logical with a little functional thrown in. Are there specific practical reasons for this?
    JeeBee : As I said, I'm going to have plugins to augment in-built functionality. A plugin is going to have to be able to parse and write its XML (and eventually into the DB) as well as provide functionality. Therefore do I have com.xx.feature.xml or com.xx.xml.feature? The former seems neater.
  • Most java projects I've worked on slice the java packages functionally first, then logically.

    Usually parts are sufficiently large that they're broken up into separate build artifacts, where you might put core functionality into one jar, apis into another, web frontend stuff into a warfile, etc.

    Tim Visher : What might that look like? domain.company.project.function.logicalSlice?
    Ben Hardy : pretty much! e.g. d.c.p.web.profiles, d.c.p.web.registration, d.c.p.apis, d.c.p.persistence etc. generally works pretty well, your mileage may vary. depends also if you're using domain modeling - if you have multiple domains, you may want to split by domain first.
    mP : I personally prefer the opposite, logical then functional. apples.rpc, apples.model and then banana.model, banana.store
    mP : There is more value in being able to group all the apple stuff together than grouping all the web stuff together.
  • I would personally go for functional naming. The short reason: it avoids code duplication and dependency nightmares.

    Let me elaborate a bit. What happens when you are using an external jar file, with its own package tree? You are effectively importing the (compiled) code into your project, and with it a (functionally separated) package tree. Would it make sense to use the two naming conventions at the same time? No, unless that was hidden from you. And it is, if your project is small enough and has a single component. But if you have several logical units, you probably don't want to re-implement, let's say, the data file loading module. You want to share it between logical units, not have artificial dependencies between logically unrelated units, and not have to choose which unit you are going to put that particular shared tool into.

    I guess this is why functional naming is the most used in projects that reach, or are meant to reach, a certain size, and logical naming is used in class naming conventions to keep track of the specific role, if any, of each class in a package.

    I will try to respond more precisely to each of your points on logical naming.

    1. If you have to go fishing in old classes to modify functionalities when you have a change of plans, it's a sign of bad abstraction: you should build classes that provide a well defined functionality, definable in one short sentence. Only a few, top-level classes should assemble all these to reflect your business intelligence. This way, you will be able to reuse more code, have easier maintenance, clearer documentation and less dependency issues.

    2. That mainly depends on the way you grok your project. Definitely, the logical and functional views are orthogonal. So if you use one naming convention, you need to apply the other one to class names in order to keep some order, or fork from one naming convention to another at some depth.

    3. Access modifiers are a good way to allow other classes that understand your processing to access the innards of your class. A logical relationship does not imply an understanding of algorithmic or concurrency constraints; a functional one may, although it does not have to. I am very wary of access modifiers other than public and private, because they often hide a lack of proper architecture and class abstraction.

    4. In big, commercial projects, changing technologies happens more often than you would believe. For instance, I have already had to change XML parsers 3 times, caching technology twice, and geolocation software twice. Good thing I had hidden all the gritty details in a dedicated package...

    Tim Visher : Forgive me if I'm wrong, but it sounds to me like you're more talking about developing a framework that's intended to be used by many people. I think the rules change between that type of development and developing end user systems. Am I wrong?
    Tim Visher : I think for common classes used by many of the vertical slices, it makes sense to have something like a `utils` package. Then, any classes that need it can simply depend on that package.
    Varkhan : Once again, size matters: when a project becomes big enough, it needs to be supported by several people, and some kind of specialization will occur. To ease interaction, and prevent mistakes, segmenting the project in parts becomes quickly necessary. And the utils package will soon become ginormous!
  • From a purely practical standpoint, Java's visibility constructs allow classes in the same package to access methods and properties with protected and default visibility, as well as the public ones. Using non-public methods from a completely different layer of the code would definitely be a big code smell. So I tend to put classes from the same layer into the same package.

    I don't often use these protected or default methods elsewhere - except possibly in the unit tests for the class - but when I do, it is always from a class at the same layer.

    Tim Visher : Isn't it something of the nature of multi-layered systems to always be dependent on the next layer down? For instance, the UI depends on your services layer which depends on your domain layer, etc. Packaging vertical slices together seems to shield against excessive inter-package dependencies, no?
    Esko Luontola : I don't use protected and default methods nearly ever. 99% are either public or private. Some exceptions: (1) default visibility for methods used only by unit tests, (2) protected abstract method which is used only from the abstract base class.
    Esko Luontola : Tim Visher, dependencies between packages are not a problem, as long as the dependencies always point in the same direction and there are no cycles in the dependency graph.
  • Packages are to be compiled and distributed as a unit. When considering what classes belong in a package, one of the key criteria is its dependencies. What other packages (including third-party libraries) does this class depend on? A well-organized system will cluster classes with similar dependencies in a package. This limits the impact of a change in one library, since only a few well-defined packages will depend on it.

    It sounds like your logical, vertical system might tend to "smear" dependencies across most packages. That is, if every feature is packaged as a vertical slice, every package will depend on every third party library that you use. Any change to a library is likely to ripple through your whole system.

    Tim Visher : Ah. So one thing that doing horizontal slices does is shield you from actual upgrades in the libraries you're using. I think you're right that the vertical slices smear every dependency your system has across every package. Why is this such a big deal?
    erickson : For a "feature", it's not such a big deal. They tend to be more volatile and have more dependencies anyway. On the other hand, this "client" code tends to have low re-usability. For something you intend to be a reusable library, isolating clients from every little change is a great investment.
  • I find myself sticking with Uncle Bob's package design principles. In short, classes which are to be reused together and changed together (for the same reason, e.g. a dependency change or a framework change) should be put in the same package. IMO, the functional breakdown would have a better chance of achieving these goals than the vertical/business-specific breakdown in most applications.

    For example, a horizontal slice of domain objects can be reused by different kinds of front-ends or even applications and a horizontal slice of the web front-end is likely to change together when the underlying web framework needs to be changed. On the other hand, it's easy to imagine the ripple effect of these changes across many packages if classes across different functional areas are grouped in those packages.

    Obviously, not all kinds of software are the same and the vertical breakdown may make sense (in terms of achieving the goals of reusability and closeability-to-change) in certain projects.

    Tim Visher : Thanks, this is very clear. As I said in the question, I feel like the number of times you would swap out technologies is far less than you would shift around vertical slices of functionality. Has this not been your experience?
    Buu Nguyen : It's not just about technologies. Point #1 of your original post only makes sense if the vertical slices are independent applications/services communicating with each other via some interface (SOA, if you will). more below...
    Buu Nguyen : Now as you move into the detail of each of those fine-grained app/service which has its own gui/business/data, I can hardly imagine that changes in a vertical slice, be it about technology, dependency, rule/workflow, logging, security, UI style, can be completely isolated from other slices.
  • I try to design package structures in such a way that if I were to draw a dependency graph, it would be easy to follow and use a consistent pattern, with as few circular references as possible.

    For me, this is much easier to maintain and visualize in a vertical naming system rather than horizontal. If component1.display has a reference to component2.dataaccess, that throws off more warning bells than if display.component1 has a reference to dataaccess.component2.

    Of course, components shared by both go in their own package.

    Tim Visher : As I understand you then, you would advocate for the vertical naming convention. I think this is what a *.utils package is for, when you need a class across multiple slices.
  • There are usually both levels of division present. From the top, there are deployment units. These are named 'logically' (in your terms, think Eclipse features). Inside deployment unit, you have functional division of packages (think Eclipse plugins).

    For example, feature is com.feature, and it consists of com.feature.client, com.feature.core and com.feature.ui plugins. Inside plugins I have very little division into other packages, although that's not unusual either.

    Update: Btw, there is a great talk by Juergen Hoeller about code organization at InfoQ: http://www.infoq.com/presentations/code-organization-large-projects. Juergen is one of the architects of Spring, and knows a lot about this stuff.

    Tim Visher : I don't quite follow. Usually you might see com.apache.wicket.x where x is either functional or logical. I usually don't see com.x. I guess you're saying that you would go with a com.company.project.feature.layer structure? Do you have reasons?
    Peter Štibraný : Reason is that "com.company.project.feature" is the unit of deployment. At this level, some features are optional, and can be skipped (i.e. not deployed). However, inside a feature, things are not optional, and you usually want them all. Here it makes more sense to divide by layers.
  • It depends. In my line of work, we sometimes split packages by functions (data access, analytics) or by asset class (credit, equities, interest rates). Just select the structure which is most convenient for your team.

    Tim Visher : Is there any reason for going either way?
    quant_dev : Splitting by asset class (more generally: by business domain) makes it easier for the new people to find their way through the code. Splitting by function is good for encapsulation. For me, "package" access in Java is akin to a "friend" in C++.
  • For package design, I first divide by layer, then by some other functionality.

    There are some additional rules:

    1. layers are stacked from most general (bottom) to most specific (top)
    2. each layer has a public interface (abstraction)
    3. a layer can only depend on the public interface of another layer (encapsulation)
    4. a layer can only depend on more general layers (dependencies from top to bottom)
    5. a layer preferably depends on the layer directly below it

    So, for a web application for example, you could have the following layers in your application tier (from top to bottom):

    • presentation layer: generates the UI that will be shown in the client tier
    • application layer: contains logic that is specific to an application, stateful
    • service layer: groups functionality by domain, stateless
    • integration layer: provides access to the backend tier (db, jms, email, ...)

    For the resulting package layout, these are some additional rules:

    • the root of every package name is <prefix.company>.<appname>.<layer>
    • the interface of a layer is further split up by functionality: <root>.<logic>
    • the private implementation of a layer is prefixed with private: <root>.private

    Here is an example layout.

    The presentation layer is divided by view technology, and optionally by (groups of) applications.

    com.company.appname.presentation.internal
    com.company.appname.presentation.springmvc.product
    com.company.appname.presentation.servlet
    ...
    

    The application layer is divided into use cases.

    com.company.appname.application.lookupproduct
    com.company.appname.application.internal.lookupproduct
    com.company.appname.application.editclient
    com.company.appname.application.internal.editclient
    ...
    

    The service layer is divided into business domains, influenced by the domain logic in a backend tier.

    com.company.appname.service.clientservice
    com.company.appname.service.internal.jmsclientservice
    com.company.appname.service.internal.xmlclientservice
    com.company.appname.service.productservice
    ...
    

    The integration layer is divided into 'technologies' and access objects.

    com.company.appname.integration.jmsgateway
    com.company.appname.integration.internal.mqjmsgateway
    com.company.appname.integration.productdao
    com.company.appname.integration.internal.dbproductdao
    com.company.appname.integration.internal.mockproductdao
    ...
    

    The advantage of separating packages like this is that it is easier to manage complexity, and it increases testability and reusability. While it seems like a lot of overhead, in my experience it actually comes very naturally, and everyone working on this structure (or something similar) picks it up in a matter of days.

    Why do I think the vertical approach is not so good?

    In the layered model, several different high-level modules can use the same lower-level module. For example: you can build multiple views for the same application, multiple applications can use the same service, multiple services can use the same gateway. The trick here is that when moving through the layers, the level of functionality changes. Modules in more specific layers don't map 1-1 on modules from the more general layer, because the levels of functionality they express don't map 1-1.

    When you use the vertical approach for package design, i.e. you divide by functionality first, you force all building blocks with different levels of functionality into the same 'functionality jacket'. You might end up designing your general modules around the more specific ones. But this violates the important principle that the more general layer should not know about more specific layers. The service layer, for example, shouldn't be modeled after concepts from the application layer.

  • I personally prefer grouping classes logically, then within that including a subpackage for each functional area.

    Goals of packaging

    Packages are after all about grouping things together - the idea being that related classes live close to each other. If they live in the same package they can take advantage of package-private visibility. The problem is that lumping all your view and persistence stuff into one package can lead to a lot of classes being mixed up in a single package. The next sensible thing to do is thus create view, persistence, and util subpackages and refactor classes accordingly. Unfortunately, protected and package-private scoping does not support the concept of the current package plus subpackages, which would aid in enforcing such visibility rules.

    I see no value in separating by functionality, because what value is there in grouping all the view-related stuff? Things in this naming strategy become disconnected, with some classes in the view package whilst others are in persistence and so on.

    An example of my logical packaging structure

    For purposes of illustration, let's name two modules - I'll use the term module as a concept that groups classes under a particular branch of a package tree.

    apple.model
    apple.store
    banana.model
    banana.store

    Advantages

    A client using the banana.store.BananaStore is only exposed to the functionality we wish to make available. The Hibernate version is an implementation detail which they do not need to be aware of, nor should they see those classes, as they add clutter to storage operations.

    Other Logical v Functional advantages

    The further up towards the root you go, the broader the scope becomes, and things belonging to one package start to exhibit more and more dependencies on things belonging to other modules. If one were to examine, for example, the "banana" module, most of the dependencies would be limited to within that module. In fact most helpers under "banana" would not be referenced at all outside this package scope.

    Why functionality ?

    What value does one achieve by lumping things based on functionality? Most classes in such a case are independent of each other, with little or no need to take advantage of package-private methods or classes. Refactoring them into their own subpackages gains little, though it does help reduce the clutter.

    Developer changes to the system

    When developers are tasked to make changes that are a bit more than trivial, it seems silly that they potentially have changes that include files from all areas of the package tree. With the logically structured approach, their changes are more local within the same part of the package tree, which just seems right.

    eljenso : "When developers [...] files from all areas of the package tree." You are right when you say this seems silly. Because this is exactly the point of proper layering: changes *do not* ripple through the entire structure of the application.
    Tim Visher : Thanks for the advice. I'm not sure I'm understanding you clearly. I believe you're trying to argue for Logical packages, I'm not exactly clear why. Could you try to make your answer a little clearer, possibly rewording? Thanks!
  • I totally follow and propose the logical ("by-feature") organization! A package should follow the concept of a "module" as closely as possible. The functional organization may spread a module over a project, resulting in less encapsulation, and prone to changes in implementation details.

    Let's take an Eclipse plugin for example: putting all the views or actions in one package would be a mess. Instead, each component of a feature should go to the feature's package, or if there are many, into subpackages (featureA.handlers, featureA.preferences etc.)

    Of course, the problem lies in the hierarchical package system (which among others Java has), which makes the handling of orthogonal concerns impossible or at least very difficult - although they occur everywhere!

  • It is an interesting experiment not to use packages at all (except for the root package.)

    The question that arises then, is, when and why it makes sense to introduce packages. Presumably, the answer will be different from what you would have answered at the beginning of the project.

    I presume that your question arises at all because packages are like categories and it's sometimes hard to decide on one or the other. Sometimes tags would be more appropriate to communicate that a class is usable in many contexts.

How to check if a web service is available using .NET

If a .NET application needs to call a web service, is there a good solution for checking whether the web service is currently up and available, outside of calling a specific web method?

On my web services I usually include some sort of status method for testing, but this becomes an issue when using a third party web service.

From stackoverflow
  • You would need to call a method regardless; otherwise you wouldn't know if the web service itself was running, i.e. the server could be running but the web service stopped.

    We do this for some of our web services and it exposes neat functionality. We can have it return true/false, and a failed call or false means it's down. This lets you "return true;" always, or conditionally return false if you choose to, such as blocking specific clients or turning the service off without actually stopping the web service, etc.

  • You could always do a simple ping to your destination to see if you get a response back. Microsoft has a simple way of doing this outlined here.

    Granted, this won't verify that the actual service is up and running, but it would save the overhead of preparing a full-fledged service call.

    Allen : You would only save milliseconds and it would not be something you could trust.
  • Nope.

    One thing you might do is create a ServiceIsUp method that does nothing, only returning true. If your server was hung for whatever reason, your request would timeout.

  • Another solution is just to do a simple HTTP get on the URL of the service. And look at the response code (i.e. 404 etc). The cool thing about this is that you don't have to post any data or call any methods.
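
    A sketch of that check in .NET (the URL and timeout are illustrative; note GetResponse throws a WebException for non-success status codes):

    using System;
    using System.Net;

    public static class ServiceProbe
    {
        // e.g. IsServiceUp("http://example.com/service.asmx?WSDL")
        public static bool IsServiceUp(string url)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Timeout = 5000; // milliseconds
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    return response.StatusCode == HttpStatusCode.OK;
                }
            }
            catch (WebException)
            {
                return false; // timeout, DNS failure, 404, 500, ...
            }
        }
    }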

    Allen : If you stopped the web service, would this work? Isn't an error page shown instead of a 404?
    James : Technically if they responded with some general html stating the app is down you would be correct. So what you can do is append "?WSDL=" to the query string of the URL. If you don't get valid XML back you know for sure it's down. Most Web Service technologies support this semantic.
    James : Edit just "?WSDL" ignore the "="

Polling Long Running Windows Service from ASP.NET

We have an application that uses Lucene.NET within a windows service to reindex our app for search. Our admin section can trigger a full reindex in Lucene, and currently the only way to review its progress is through a log file written to disc.

This is clunky. We'd like to poll the service to determine the reindexing progress.

Does anyone have any insight into this?

From stackoverflow
  • Named pipes would be the way I would do cross process communication in this instance, if both processes would be running on the same machine.
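
    A minimal sketch of that approach using WCF's named-pipe binding (contract, class, and address names are all illustrative; requires a reference to System.ServiceModel):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IReindexStatus
    {
        [OperationContract]
        int GetPercentComplete();
    }

    public class ReindexStatus : IReindexStatus
    {
        public static volatile int Progress;   // updated by the indexing thread
        public int GetPercentComplete() { return Progress; }
    }

    // Hosted inside the Windows service:
    //   var host = new ServiceHost(typeof(ReindexStatus),
    //       new Uri("net.pipe://localhost/reindex"));
    //   host.AddServiceEndpoint(typeof(IReindexStatus),
    //       new NetNamedPipeBinding(), "status");
    //   host.Open();
    //
    // Polled from ASP.NET:
    //   var factory = new ChannelFactory<IReindexStatus>(
    //       new NetNamedPipeBinding(), "net.pipe://localhost/reindex/status");
    //   int pct = factory.CreateChannel().GetPercentComplete();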

    If both processes are on different machines, it gets hairier and will probably involve something along the lines of a web service communicating with the process and then asp.net calling the web service.

How can I report on files with pending changes in TFS?

I'd like to create a simple report that shows files that currently have pending changes (checked out) from a TFS 2008 server. I know that I can use the "Find in Source Control" option from Team Explorer, but I'd rather have a Reporting Services report if that's possible. Ideally, I'd be able to show when the file was checked out and the user that checked it out, but that's not imperative.

If the data isn't pushed to the TFS data warehouse by default, then I'd like to find the relational table(s) in the SQL Server instance that would need to be queried.

I've spent some time digging around the TFS data warehouse and looking at all of the canned Reporting Services reports that I can get my hands on, but everything seems to be geared towards work items, check-ins associated with work items, etc...

From stackoverflow
  • If you're looking for some easy to read data and not too worried about print outs, have a look at the TFS sidekick application by Attrice. Very helpful and if you have the correct permissions, you'll be able to see all the checked out files.

    http://www.attrice.info/cm/tfs/

    Paul G : Thanks Ray. I have tried the sidekicks and can get the list from team explorer. I was hoping to get into Reporting Services so I can subscribe to a weekly report and control what columns are displayed.
  • I doubt the information you're looking for is in the data warehouse and even if it was it might not be fresh enough for your purposes. By default the warehouse is updated once an hour.

    You could use SSRS to report directly against the TFSVersionControl database but I would not recommend going this route. The database is not documented and chances are very good that it will change in the next version. It could also have performance implications if your queries are not written correctly.

    A better solution would be to use the TFS web services as your SSRS data source. There are services you can call to get all files that are checked out. This information is always current, and the queries it runs are highly optimized.
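
    For reference, the same data can be pulled with the TFS client API, which such a data source could wrap (a sketch; the server URL is illustrative, and it requires the TFS 2008 client assemblies Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.VersionControl.Client):

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.VersionControl.Client;

    class PendingChangesReport
    {
        static void Main()
        {
            var server = TeamFoundationServerFactory.GetServer("http://tfsserver:8080");
            var vcs = (VersionControlServer)server.GetService(typeof(VersionControlServer));

            // null, null = all workspaces, all users
            foreach (PendingSet set in vcs.QueryPendingSets(
                         new[] { "$/" }, RecursionType.Full, null, null))
            {
                foreach (PendingChange change in set.PendingChanges)
                {
                    Console.WriteLine("{0}\t{1}\t{2}",
                        set.OwnerName, change.ServerItem, change.CreationDate);
                }
            }
        }
    }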

  • Example command line (Studio 2008):

    "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\tf.exe" status /recursive /user:*

Adding a child node to XML in flex

In a Flex application, I have an XML object that I'm binding to a tree control. I'm able to add a child node to the XML, but when I try to add a child to that child node it doesn't appear on the tree control:

tree = <node label="Root">
           <node label="Category 1"/>
           <node label="Category2"/>
           <node label="Category3"/>
           <node label="Category 4">
               <node label="SubCategory4.1"/>
               <node label="SubCategory4.2"/>
           </node>
       </node>;
var someNode:XMLNode = new XMLNode(9, 'Category5');
var aSubNode:XMLNode = new XMLNode(9, 'SubCategory5.1');
someNode.appendChild(aSubNode);
tree.appendChild(someNode);

So Category5 appears on the tree control but SubCategory5.1 does not. What am I missing?

From stackoverflow
  • If you are using Flex, use AS3. XMLNode is AS2. In short, try this:

    tree = <node label="Root">
               <node label="Category 1"/>
               <node label="Category2"/>
               <node label="Category3"/>
               <node label="Category 4">
                   <node label="SubCategory4.1"/>
                   <node label="SubCategory4.2"/>
               </node>
           </node>;
    var someNode:XML = <node label="Category5"/>;
    var aSubNode:XML = <node label="SubCategory5.1"/>;
    someNode.appendChild(aSubNode);
    tree.appendChild(someNode);
    
    philcruz : That works! Thanks!

How to dynamically load a Python class

Given a string of a Python class, e.g. 'my_package.my_module.MyClass', what is the best possible way to load it?

In other words I am looking for a Class.forName() function in Python. It needs to work on Google App Engine.

Preferably this would be a function that accepts the FQN of the class as a string, and returns a reference to the class:

my_class = load_class('my_package.my_module.MyClass')
my_instance = my_class()
From stackoverflow
  • module = __import__("my_package/my_module")
    the_class = getattr(module, "MyClass")
    obj = the_class()
    
    Jason Baker : Note that this works because of a bug in the __import__ function. File paths should *not* be used in the __import__ function and will not work in python 2.6 and above: http://docs.python.org/whatsnew/2.6.html#porting-to-python-2-6
  • From the python documentation, here's the function you want:

    def my_import(name):
        mod = __import__(name)
        components = name.split('.')
        for comp in components[1:]:
            mod = getattr(mod, comp)
        return mod
    

    The reason a simple __import__ won't work is because any import of anything past the first dot in a package string is an attribute of the module you're importing. Thus, something like this won't work:

    __import__('foo.bar.baz.qux')
    

    You'd have to call the above function like so:

    my_import('foo.bar.baz.qux')
    

    Or in the case of your example:

    klass = my_import('my_package.my_module.my_class')
    some_object = klass()
    

    EDIT: I was a bit off on this. What you're basically wanting to do is this:

    from my_package.my_module import my_class
    

    The above function is only necessary if you have an empty fromlist. Thus, the appropriate call would be like this:

    mod = __import__('my_package.my_module', globals(), locals(), ['my_class'])
    klass = getattr(mod, 'my_class')
    

    Note that the last three arguments don't really do much other than provide context (so even though a list with 'my_class' in it was passed, my_class won't be imported into the local namespace).

    pjesi : I tried my_import('my_package.my_module.my_class') but get "no module found my_class", which makes sense since it is a class, not a module. However, I can use getattr to get the class after the call to my_import
    Jason Baker : That's odd. Everything past the first dot is called using getattr. There shouldn't be any difference.
    Jason Baker : Figured it out. See the post edit.
    pjesi : Thanks, I think this is the best way. Now I only need the best way to split the string 'my_package.my_module.my_class' into mod_name and klass_name, but I guess I can figure that out :)
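
    Putting the pieces together, the load_class the question asks for might look like this (a sketch using rsplit for that final split):

    def load_class(fqn):
        """Load a class from a fully-qualified name like 'pkg.module.Class'."""
        module_name, class_name = fqn.rsplit('.', 1)
        module = __import__(module_name, globals(), locals(), [class_name])
        return getattr(module, class_name)

    # usage:
    # my_class = load_class('my_package.my_module.MyClass')
    # my_instance = my_class()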

How can i get a BitmapData object out of a File object from a local jpeg file in AIR?

So I am trying to create an image gallery. My AIR application accepts files that are dragged and dropped onto a TileList component. I am using the images as icons, but the problem is that they take a long time to load, so I want to compress the file data first (I have that part done). The problem is that I can't figure out how to open the file and put the data into a BitmapData object.

Any ideas?

From stackoverflow
  • var req:URLRequest = new URLRequest(value.file.url);
    var ldr:Loader = new Loader();
    ldr.contentLoaderInfo.addEventListener(Event.COMPLETE, completeHandler);
    ldr.load(req);
    .
    .
    .
    private function completeHandler(event:Event):void {
      var ldr:Loader = Loader(event.target.loader);
      var b:Bitmap = Bitmap(ldr.content);
      var bd:BitmapData = b.bitmapData; // the BitmapData object the question asks for
    }
    
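    Alternatively, the file's bytes can be read directly and handed to Loader.loadBytes, which avoids the URL round-trip; a sketch (assumes the same value.file object; error handling omitted):

    var stream:FileStream = new FileStream();
    stream.open(value.file, FileMode.READ);
    var bytes:ByteArray = new ByteArray();
    stream.readBytes(bytes);
    stream.close();

    var loader:Loader = new Loader();
    loader.contentLoaderInfo.addEventListener(Event.COMPLETE, function(e:Event):void {
        // Bitmap.bitmapData is the BitmapData the question asks for
        var bd:BitmapData = Bitmap(loader.content).bitmapData;
    });
    loader.loadBytes(bytes);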

Is it possible to alter a table in MySQL via stored procedures?

here's what I'd like to do in MySQL... I'm getting the feeling that this is simply not feasible, but would love to be wrong...

create procedure foo(IN MYTABLE varchar(50) , IN COLNAME varchar (50), IN MYTYPE varchar(50)) 
begin 
IF (select count(*) from information_schema.columns where table_name =MYTABLE and column_name = COLNAME) = 0 
THEN
alter table MYTABLE add column MYNAME MYTYPE; 
end;

call foo( 'table_foo' , 'column_bar' , 'varchar(100)' );
From stackoverflow
  • Short answer: No.

    Long answer: Why? Your data model should not be as fluid as you seem to think you want it to be. Lock it down. Normalize it. If you need arbitrary columnar values, use a User Defined Field pattern. Don't go messing with your tables at a whim. Not a good idea.

    Dr.Dredel : thanks! I was just trying to prevent seeing a bunch of "error column already exists" as we run the script across multiple DBs, if it's accidentally run multiple times on the same box.
    Randolpho : So this is for an install script of some sort? I wouldn't bother worrying about whether or not it's run multiple times. Handle that issue in documentation.
    Quassnoi : It's actually possible, see my post.
  • Don't know why on Earth you would want it, but it's possible:

    DELIMITER //
    DROP PROCEDURE foo//
    CREATE PROCEDURE foo(IN MYTABLE varchar(50) , IN COLNAME varchar (50), IN MYTYPE varchar(50))
    BEGIN
      SET @ddl = CONCAT('alter table ', MYTABLE, ' add column (', COLNAME, ' ', MYTYPE, ')');
      PREPARE STMT FROM @ddl;
      EXECUTE STMT;
    END;
    //
    
    Randolpho : Wow, talk about SQL injection!
    Quassnoi : It's in no way what I'd advise my children to do, but it's still possible :)

SQL Date Search without time

I have a query that searches by date. The dates in the database include the time. How do I search on just the date?

select * from weblogs.dbo.vwlogs
where Log_time between @BeginDate and @EndDAte
    and (client_user=@UserName or @UserName Is null)
order by Log_time desc

cmd.Parameters.AddWithValue("@BeginDate", txtBeginDate.Text);
cmd.Parameters.AddWithValue("@EndDAte", txtEndDate.Text);

From stackoverflow
  • Leave your sql mostly as is and just fix your parameters:

    cmd.Parameters.Add("@BeginDate", SqlDbType.DateTime).Value =
        DateTime.Parse(txtBeginDate.Text).Date;       
    cmd.Parameters.Add("@EndDAte", SqlDbType.DateTime).Value =
        // add one to make search inclusive
        DateTime.Parse(txtEndDate.Text).Date.AddDays(1);
    

    You also want to check to make sure your textboxes are valid datetimes first, but you should get the idea.

    The only caveat here is that due to a quirk with the BETWEEN operator it will match the first instant of the next day. So, to fix that we write the query like this:

    SELECT * 
    FROM vwlogs 
    WHERE Log_time >= @BeginDate AND Log_Time < @EndDate 
        AND (client_user=@UserName OR @UserName IS NULL) 
    ORDER BY Log_time DESC
    

    Pay special attention to the comparison operators around the date.

    Joel Coehoorn : Oops: it's just SqlDbType, no 's'. That's what I get for typing directly into a reply window rather than going through Visual Studio first. Fixed the post.
    Joel Coehoorn : As an aside, you should always use an explicit type: letting .Net infer your intended parameter type can lead to hard-to-find performance issues.
  • If you want to change the sql instead,

    TRUNC(Log_Time) will reduce every datetime to that date at midnight.

    Make sure that you build your index on the column as TRUNC(Log_TIME) so it's usable.

  • In SQL, round the start and end dates to whole dates, and use >= @BeginDate and, very specifically, < @EndDAte. The "rounding" process is not very elegant, I'm afraid.

    e.g.

    SELECT @BeginDate = DATEADD(Day, DATEDIFF(Day, 0, @BeginDate), 0),
           @EndDAte = DATEADD(Day, DATEDIFF(Day, 0, @EndDAte) + 1, 0)
    
    select * 
    from weblogs.dbo.vwlogs 
    where     Log_time >= @BeginDate 
          and Log_time < @EndDAte
          and (@UserName Is null OR client_user=@UserName)
    order by Log_time desc
    

    Note that I've moved "@UserName Is null" first, as there is some evidence that this test will easily pass/fail, and will cause the second more CPU intensive test (client_user=@UserName) to be ignored if the first test is TRUE (may be TommyRot of course ...)

    Also, for best performance, you should explicitly name all the columns you need, and not use "SELECT *" (but that may just have been for the purpose of this question)

    Joel Coehoorn : Order of the @username parameters doesn't matter: the optimizer should figure it out and reorder if needed.
    Portman : Technically, this is "flooring" the date, not "rounding it". http://stackoverflow.com/questions/85373/floor-a-date-in-sql-server
    Kristen : Thanks. I meant to set "@EndDAte" to Midnight FOLLOWING, not preceding. I've edited my post
  • Another gotcha - truncating your end date will NOT include that date! Consider:

    WHERE Log_Time >= @BeginDate AND Log_Time < @EndDate

    If @EndDate is truncated it will be midnight and not match anything on that day. You'll need to add a day!

  • The first thing to do is to remove the times from the dates. If you want to do this in the SQL Server code, you can use something like the code below. I have this as a function on all the databases I work on:

    cast(floor(cast(@fromdate as float)) as datetime)
    

    The next thing to worry about is the where criteria. You need to make sure you select everything from the start of the from date to the end of the to date. You also need to make sure queries for one day will work which you can do with a date add like this

    Where LogTime >= @fromdate and LogTime < DateAdd(dd, 1, @todate)
    
  • Clean up the dates by adding the following line before your query...

    select 
        @begindate=dateadd(day,datediff(day,0,@begindate),0),
        @enddate=dateadd(ms,-3,dateadd(day,datediff(day,0,@enddate),1))
    

    This will floor your begin date to the lowest possible time (00:00:00.000), and ceiling your end date to the highest possible (23:59:59.997). You can then keep your BETWEEN query exactly as it was written.

    select * 
    from weblogs.dbo.vwlogs 
    where Log_time between @BeginDate and @EndDAte 
    and (client_user=@UserName or @UserName Is null) 
    order by Log_time desc
    

    Hope this helps.

Concrete class specific methods

I have an interesting problem. Consider this class hierarchy:

class Base
{
public:
   virtual float GetMember( void ) const =0;
   virtual void SetMember( float p ) =0;
};

class ConcreteFoo : public Base
{
public:
   ConcreteFoo( "foo specific stuff here" );

   virtual float GetMember( void ) const;
   virtual void SetMember( float p );

   // the problem
   void foo_specific_method( "arbitrary parameters" );
};

Base* DynamicFactory::NewBase( std::string drawable_name );

// it would be used like this
Base* foo = dynamic_factory.NewBase("foo");

I've left out the DynamicFactory definition and how Builders are registered with it. The Builder objects are associated with a name and will allocate a concrete implementation of Base. The actual implementation is a bit more complex, with shared_ptr to handle memory reclamation, but that is not important to my problem.

ConcreteFoo has a class-specific method. But since the concrete instances are created in the dynamic factory, the concrete classes are not known or accessible; they may only be declared in a source file. How can I expose foo_specific_method to users of Base*?

I'm adding the solutions I've come up with as answers. I've named them so you can easily reference them in your answers.

I'm not just looking for opinions on my original solutions, new ones would be appreciated.

From stackoverflow
  • Add special functions to Base.

    The simplest and most unacceptable solution is to add foo_specific_method to Base. Then classes that don't use it can just define it to be empty. This doesn't work because users are allowed to register their own Builders with the dynamic_factory. The new classes may also have concrete class specific methods.

    In the spirit of this solution, here is one slightly better: add generic functions to Base.

    class Base
    {
       ...
       /// \return true if 'kind' supported
       virtual bool concrete_specific( int kind, "foo specific parameters" );
    };
    

    The problem here is that there may be quite a few overloads of concrete_specific for different parameter sets.

  • Just cast it.

    When a foo specific method is needed, generally you know that the Base* is actually a ConcreteFoo. So just ensure the definition of class ConcreteFoo is accessible and:

    ConcreteFoo* foo2 = dynamic_cast<ConcreteFoo*>(foo);
    

    One of the reasons I don't like this solution is dynamic_casts are slow and require RTTI.

    The next step from this is to avoid dynamic_cast.

    ConcreteFoo* foo_cast( Base* d )
    {
       if( d->id() == the_foo_id )
       {
          return static_cast<ConcreteFoo*>(d);
       }
    
       throw std::runtime_error("you're screwed");
    }
    

    This requires one more method in the Base class, which is completely acceptable, but it requires the IDs to be managed. That gets difficult when users can register their own Builders with the dynamic factory.

    I'm not too fond of any of the casting solutions as it requires the user classes to be defined where the specialized methods are used. But maybe I'm just being a scope nazi.

    strager : Would +1 if community wiki (see comments on question). Using an id system is just what RTTI does (though RTTI provides a safer method and implements things differently).
    KeithB : I agree. If you are worried about the performance of dynamic_cast (I wouldn't be), do some tests. Unless you are doing this in a tight loop, I can't imagine that it would be that expensive.
  • The CrazyMetaType solution.

    This solution is not well thought out. I was hoping someone might have had experience with something similar. I saw this applied to the problem of an unknown number of a known type. It was pretty slick. I was thinking to apply it to an unknown number of unknown types.

    The basic idea is that the CrazyMetaType collects the parameters in a type-safe way, then executes the concrete specific method.

    class Base
    {
       ...
       virtual CrazyMetaType concrete_specific( int kind ) =0;
    };
    
    // used like this
    foo->concrete_specific(foo_method_id) << "foo specific" << foo_specific;
    

    My one worry with this solution is that CrazyMetaType is going to be insanely complex to get this to work. I'm up to the task, but I cannot count on future users to be C++ experts just to add one concrete-specific method.

  • The cstdarg solution.

    Bjarne Stroustrup said:

    A well defined program needs at most a few functions for which the argument types are not completely specified. Overloaded functions and functions using default arguments can be used to take care of type checking in most cases when one would otherwise consider leaving argument types unspecified. Only when both the number of arguments and the type of arguments vary is the ellipsis necessary.

    class Base
    {
       ...
       /// \return true if 'kind' supported
       virtual bool concrete_specific( int kind, ... ) =0;
    };
    

    The disadvantages here are:

    • almost no one knows how to use cstdarg correctly
    • it doesn't feel very c++-y
    • it's not typesafe.
  • Could you create other non-concrete subclasses of Base and then use multiple factory methods in DynamicFactory?

    Your goal seems to be to subvert the point of subclassing. I'm really curious to know what you're doing that requires this approach.

  • If the concrete object has a class-specific method then it implies that you'd only be calling that method specifically when you're dealing with an instance of that class and not when you're dealing with the generic base class. Is this coming about b/c you're running a switch statement which is checking for object type?

    I'd approach this from a different angle, using the "unacceptable" first solution but with no parameters, with the concrete objects having member variables that would store their state. Though I guess this would force you to have a member associative array as part of the base class to avoid casting to set the state in the first place.

    You might also want to try out the Decorator pattern.

  • You could do something akin to the CrazyMetaType or the cstdarg argument but simple and C++-ish. (Maybe this could be SaneMetaType.) Just define a base class for arguments to concrete_specific, and make people derive specific argument types from that. Something like

    class ConcreteSpecificArgumentBase;
    
    class Base
    {
       ...  
       virtual void concrete_specific( ConcreteSpecificArgumentBase &argument ) =0;
    };
    

    Of course, you're going to need RTTI to sort things out inside each version of concrete_specific. But if ConcreteSpecificArgumentBase is well-designed, at least it will make calling concrete_specific fairly straightforward.

  • The cast would be faster than most other solutions, however:

    In the base class, add:

    void passthru( const string &concreteClassName, const string &functionname, vector<string*> args )
    {
        if( concreteClassName == className )
             runPassThru( functionname, args );
    }
    
    private:
        string className;
        map<string, int> funcmap;
        virtual void runPassThru( const string &functionname, vector<string*> args ) {}
    

    in each derived class:

    void runPassThru( const string &functionname, vector<string*> args )
    {
       switch( funcmap[ functionname ] )
       {
          case 1:
              //verify args
              // call function
            break;
          // etc..
       }
    }
    
    // call in constructor
    void registerFunctions()
    {
          funcmap[ "functionName" ] = id;
          //etc.
    }
    
  • The weird part is that the users of your DynamicFactory receive a Base type, but need to do specific stuff when it is a ConcreteFoo.

    Maybe a factory should not be used.

    Try to look at other dependency injection mechanisms like creating the ConcreteFoo yourself, pass a ConcreteFoo type pointer to those who need it, and a Base type pointer to the others.

  • The context seems to assume that the user will be working with your ConcreteType and will know it is doing so.

    In that case, it seems that you could have another factory method that returns ConcreteType*, for clients that know they're dealing with the concrete type and need to work at that level of abstraction.
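
    A sketch of that idea, wrapping the question's factory in a typed helper so the down-cast lives in one place (the helper itself is hypothetical; it reuses NewBase from the question):

    #include <stdexcept>
    #include <string>

    template <typename Concrete>
    Concrete* new_concrete( DynamicFactory& factory, const std::string& name )
    {
        Base* base = factory.NewBase( name );
        Concrete* derived = dynamic_cast<Concrete*>( base );
        if ( !derived )
        {
            delete base;  // don't leak the wrong-type object
            throw std::runtime_error( "factory object is not the requested type" );
        }
        return derived;
    }

    // usage:
    //   ConcreteFoo* foo = new_concrete<ConcreteFoo>( dynamic_factory, "foo" );
    //   foo->foo_specific_method( ... );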