Dell Desktop / Notebook Linux Engineering position available

Come help Dell ensure Linux “just works!” on Dell notebooks, desktops, and devices! The Dell Client Linux Engineering team has an opening for a Senior Software Engineer. This team works closely with the Linux community, device manufacturers, and Dell engineering teams to provide the best Linux experience across the entire client product line.

Visit the Dell Jobs site to apply. If you’re a friend of mine and are interested, drop me a line and I’ll make sure you get in front of the hiring manager quickly!

Dell’s Linux Engineering team is hiring

Dell’s Linux Engineering team, based in Austin, TX, is hiring a Senior Principal Engineer. This role is one I’ve previously held and enjoyed greatly – ensuring that Linux (all flavors) “just works” on all Dell PowerEdge servers. It is as much a relationship role (working closely with Dell and partner hardware teams, OS vendors and developers, internal teams, and the greater open source community) as it is technical (device driver work, OS kernel and userspace work). If you’re a “jack of all trades” in Linux and looking for a very senior technical role to continue the outstanding work that ranks Dell at the top of charts for Linux servers, we’d love to speak with you.

The formal job description is on Dell’s job site. If you’d like to speak with me personally about it, drop me a line.

s3cmd 1.5.2 – major update coming to Fedora and EPEL

As the new upstream maintainer for the popular s3cmd program, I have been collecting and making fixes all across the codebase for several months. In the last couple of weeks it has finally gotten stable enough to warrant publishing a formal release. Aside from bugfixes, its primary enhancement is support for the AWS Signature v4 method, which is required to create S3 buckets in the eu-central-1 (Frankfurt) region and is a more secure request-signing method usable in all AWS S3 regions. Since the s3cmd v1.5.0 release, python 2.7.9 (shipped in Arch Linux, for example) added support for SSL certificate validation. Unfortunately, that validation broke for SSL wildcard certificates (e.g. *.s3.amazonaws.com). Ubuntu 14.04 has an intermediate flavor of this validation, which also broke s3cmd. A couple of quick fixes later, and v1.5.2 is published now.
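
For example, creating a bucket in the Frankfurt region now works with the newer signing method. A minimal sketch, assuming the standard s3cmd command-line flags (the bucket name here is hypothetical):

# Create a bucket in eu-central-1; this needs Signature v4 support (s3cmd >= 1.5.2)
s3cmd mb s3://example-bucket-frankfurt --bucket-location=eu-central-1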

I’ve updated the packages in Fedora 20, 21, and rawhide. The EPEL 6 and 7 epel-testing repositories have these as well. If you use s3cmd on RHEL/CentOS, please upgrade to the package in epel-testing and give it karma. Bug reports are welcome at https://github.com/s3tools/s3cmd.
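
On RHEL/CentOS, pulling the testing build looks something like this (a sketch; adjust to however you normally enable EPEL repositories):

# Upgrade s3cmd from the epel-testing repository on EL6/EL7
yum --enablerepo=epel-testing update s3cmd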

Spamfighting: updated opendmarc packages, handling DMARC p=reject

I took a few months off from dealing with my spam problems, choosing to stick my head in the sand. Probably not my wisest move…

In the interim, the opendmarc developers have been busy, releasing version 1.3.0, which also adds the nice feature of doing SPF checking internally. This lets me CLOSE WONTFIX the smf-spf and libspf2 packages from the Fedora review process and remove them from my system. “All code has bugs. Unmaintained code with bugs that you aren’t running can’t harm you.” New packages and the open Fedora review are available.

I’ve also had several complaints from friends, all @yahoo.com users, who have been sending mail to myself and family @domsch.com. In most cases, @domsch.com simply forwards the emails on to yet another mail provider – it’s providing a mail forwarding service for a vanity domain name. However, now that Yahoo and AOL are publishing DMARC p=reject rules, after smtp.domsch.com forwarded the mail on to its ultimate home, those downstream servers were rejecting the messages (presumably on SPF grounds – smtp.domsch.com isn’t a valid mail server for @yahoo.com).

My solution to this is a bit awkward, but will work for a while. Instead of forwarding mail from domains with DMARC p=reject or p=quarantine, I now store those messages and serve them up via POP3/IMAP to their ultimate destination. I’m using procmail to do the sorting and forwarding:


# Store messages locally (for later POP3/IMAP pickup) instead of forwarding
# them, when the sender's domain publishes a DMARC policy of reject or quarantine.
DEFAULT="/home/mdomsch/Mail/"
# Preserve the original envelope sender when re-sending forwarded mail.
SENDER=`formail -c -x Return-Path`
SENDMAILFLAGS="-oi -f $SENDER"

# Extract the From: address, look up that domain's DMARC policy with
# opendmarc-check, and deliver to $DEFAULT if the policy is reject or quarantine.
:0 H
* ? formail -x'From:' | grep -o '[[:alnum:]+\.\_\-]*@[[:alnum:]+\.\_\-]*' | xargs opendmarc-check | egrep -s 'Domain policy: (reject|quarantine)'
${DEFAULT}

# Everything else is forwarded on as before.
:0
! mdomsch@example.com
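
The recipe leans on opendmarc-check, a small lookup tool shipped with opendmarc, to fetch a domain’s published DMARC policy. A quick manual check might look like this (the output format is assumed to include the “Domain policy:” line the recipe greps for):

# Look up the published DMARC policy for a sending domain
opendmarc-check yahoo.com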

This introduces quite a bit of latency (on the order of an hour) for mail delivery from my friends with @yahoo.com addresses, but it keeps them from getting rejected due to their email provider’s lousy choice of policy.

Tim Draegen, the guy behind the excellent dmarcian.com, is chairing a new IETF working group focusing on proper handling of “indirect email flows” such as mailing lists and vanity domain forwarding. I’m hoping to have time to get involved there. If you care, follow along on their mailing lists.

I’m choosing to ignore the fact that domsch.com is getting spoofed 800k times a week (as reported by 8 mail providers and visualized nicely on dmarcian.com), at least for now. I’m hoping the new working group can come up with a method to help solve this.

Do your friends use a mail service publishing DMARC p=reject? Has it caused problems for you? Let me know in the comments below.

[REPOST] Who am I?

I’ve started blogging again on the Dell TechCenter site, Enterprise Mobility Management section, along with the rest of my team.

Here’s the intro to my first post, “Who am I?”:

The existential question, asked by everyone and everything throughout their lifetimes – who am I? High school seniors choosing a college, college seniors considering grad school or entering the job market, adults in the midst of their mid-life crisis—the question comes far easier than the answer.

In the world of technology, who you are depends on the technology with which you are interacting. On Facebook, you are your quirky personal self, with pictures of your family and vacations you take. On LinkedIn, you are your professional self, sharing articles and achievements that are aligned with your career.

What about on the myriad devices you carry around? On the smartphone in my pocket, I have several personas—personal, business, gamer (my kids borrow my phone), constantly context-switching between them. In the not-too-distant past, people would carry two phones—one for personal use and one for work, keeping the personas separate via physical separation—two personas, two devices.

Read more…

Ottawa Linux Symposium needs your help

If you have ever attended the Ottawa Linux Symposium (OLS), read a paper on a technology first publicly suggested at OLS, or use Linux today, please consider donating to help the conference and Andrew Hutton, the conference’s principal organizer since 1999.

I first attended OLS in the summer of 2003. I had heard of this mythical conference in Canada each summer, a long way from Austin yet still considered domestic rather than international for the purposes of business travel authorization, so getting approval to attend wasn’t so hard. I met Val on the walk from Les Suites to the conference center on the first morning, James Bottomley during a storage subsystem breakout the first afternoon, Jon Masters while still in his manic coffee phase, and countless others that first year. Willie organized the bicycle-chain keysigning that helped people put faces to names we only knew via LKML posts. I remember meeting Andrew in the ever-present hallway track, and somehow wound up on the program committee for the following year and the next several.

I went on to submit papers in 2004 (DKMS), 2006 (Firmware Tools), and 2008 (MirrorManager). Getting a paper accepted meant great exposure for your projects (these three are still in use today). It also meant an invitation to the party-within-the-party – the excellent speaker events that Andrew organized as a thank-you to the speakers. Scotch tastings with a haggis celebrated by Stephen Tweedie. A cruise on the Ottawa River. An evening in a cold war fallout shelter reserved for Parliament officials, with the most excellent Scotch that only Mark Shuttleworth could bring. These were a special treat I always looked forward to.

Andrew, and all the good people who helped organize OLS each year, put on quite a show, being intentional about building the community – not by numbers (though for quite a while, attendance grew and grew) – but by providing space to build the deep personal connections that are so critical to the open source development model. It’s much harder to be angry about someone rejecting your patches when you’ve met them face to face; rather than assuming it’s out of spite, you understand the context behind their decisions and how you can better work within that context. At OLS I first met, face to face, many of the Linux developers who became my colleagues over the last 15 years.

I haven’t been able to attend for the last few years, but always enjoyed the conference, the hallway talks, the speaker parties, and the intentional community-building that OLS represents.

Several economic changes conspired to put OLS into the financial bind it is today. You can read Andrew’s take about it on the Indiegogo site. I think the problems started before the temporary move to Montreal. In OLS’s growth years, the Kernel Summit was co-located, and preceded OLS. After several years with this arrangement, the Kernel Summit members decided that OLS was getting too big, that the week got really really long (2 days of KS plus 4 days of OLS), and that everyone had been to Ottawa enough times that it was time to move the meetings around. Cambridge, UK would be the next KS venue (and a fine venue it was). But in moving KS away, some of the gravitational attraction of so many kernel developers left OLS as well.

The second problem came in moving the Ottawa Linux Symposium to Montreal for a year. This was necessary, as the conference facility in Ottawa was being remodeled (really, rebuilt from the ground up), which prevented it from being held there. This move took even more of the wind out of the sails. I wasn’t able to attend the Montreal symposium, nor since, but as I understand it, attendance has been on the decline ever since. Andrew’s perseverance has kept the conference alive, albeit smaller, at a staggering personal cost.

Whether or not the conference happens in 2015 remains to be seen. Regardless, I’ve made a donation to support the debt relief, in gratitude for the connections that OLS forged for me in the Linux community. If OLS has had an impact in your career, your friendships, please make a donation yourself to help both Andrew, and the conference.

Visit the OLS Indiegogo site to show your support.

Web Software Developer role on my Dell Software team

My team in Dell Software continues to grow. I have two critical roles open now, and am looking for fantastic people to join my team. The first, posted a few weeks ago, is for a Principal Web Software Engineer, a very senior engineering role. This second one is for a Java web application developer (my team is using JBoss Portal Server / GateIn for the web UI layer). Both give you the chance to enjoy life in beautiful Austin, TX while working on cool projects! Let me know if you are interested.

http://jobs.dell.com/round-rock/engineering/jobid5450194-web-software-engineer-jobs

Web Software Engineer- Round Rock, TX

Dell Software empowers companies of all sizes to experience Dell’s “Power to Do More” by delivering scalable yet simple-to-use solutions to drive value and accelerate results. We are uniquely positioned to address today’s most pressing business and IT challenges with holistic, connected software offerings. Dell Software Group’s portfolio now includes the products and technologies of Quest Software, AppAssure, Enstratius, Boomi, KACE and SonicWALL. For more information, please visit http://software.dell.com/.

This role is for a unique and talented technical contributor in the Dell Software Group responsible for a number of activities required to design, develop, test, maintain and operate web software applications built using J2EE and JBoss GateIn portal.

Role Responsibilities
-Develop server software in the Java environment
-Create, maintain, and manipulate web application content in HTML, CSS, JavaScript, and Java languages
-Unit test web application software on the client and on the server
-Provide guidance and support to application testing and quality assurance teams
-Contribute to process diagrams, wiki pages, and other documentation
-Work in an Agile software development environment
-Work in a Linux environment

Requirements
-Strong technical and creative skills as well as an understanding of software engineering processes, technologies, tools, and techniques, including: Red Hat JBoss Enterprise Application Platform (or similar J2EE frameworks), JBoss GateIn, HTML5, CSS, JavaScript, jQuery, SAML, and web security
-3+ years of client or server Java experience
-Experience with JBoss Portal Server applications and/or other web applications
-Possess visual web application design skills
-Implementation knowledge of internationalization and localization
-HTML, CSS, JavaScript, RESTful interface design and utilization skills, OAuth or other token-based authentication and authorization

Preferences
-Experience with Atlassian products, continuous integration
-Be willing and able to learn new technologies and concepts
-Demonstrate resourcefulness in resolving technical issues
-Advanced interpersonal skills; able to work independently and as part of a team

Life At Dell
Equal Employment Opportunity Policy
Equal Opportunity Employer/Minorities/Women/Veterans/Disabled

Job ID: 14000MXW

Job opening on my team in Dell Software

One of the things I’ve started to enjoy as a people manager at Dell, more than as an “individual contributor”, is that I have a lot of say in what skills my team needs, and can look for people I really want to have on my team. I’ve got one such opening posted now, for a Principal Web Software Engineer. Please contact me if you or someone you know would be a good fit.

Principal Web Software Engineer- Round Rock, TX

Dell Software empowers companies of all sizes to experience Dell’s “Power to Do More” by delivering scalable yet simple-to-use solutions to drive value and accelerate results. We are uniquely positioned to address today’s most pressing business and IT challenges with holistic, connected software offerings. Dell Software Group’s portfolio now includes the products and technologies of Quest Software, AppAssure, Enstratius, Boomi, KACE and SonicWALL. For more information, please visit http://software.dell.com/.

Come work on a dynamic team creating software that impacts enterprise customers on a day-to-day basis. Great opportunity to work on cutting-edge technology using evolving development tools and methods.

This role is for a unique and talented technical contributor in the Dell Software Group responsible for a number of activities required to design, develop, test, maintain and operate web software applications built using J2EE and JBoss GateIn portal.
The position requires strong technical and creative skills as well as an understanding of software engineering processes, technologies, tools, and techniques including: Red Hat JBoss Enterprise Application Platform (or similar J2EE frameworks), JBoss GateIn, HTML5, CSS, JavaScript, jQuery, SAML, and web security.

Role Responsibilities
-Develop high-level and low-level design documents for modules of the web application
-Provide technical leadership to a team of software engineers
-Develop server software in the Java environment
-Create, maintain, and manipulate web application content in HTML5, CSS, and JavaScript
-Unit test web application software on the client and on the server
-Provide guidance and support to application testing and quality assurance teams
-Support field sales on specific customer engagements
-Work in an Agile software development environment
-Work in a Linux environment
-Experience with Atlassian products, continuous integration
-Experience with Integration Platform-as-a-Service (Dell Boomi)

Requirements
-8+ years of experience designing and developing Java-based web applications
-Deep understanding of the components and capabilities of J2EE, with the ability to help make architectural decisions on component choices
-Possess visual web application design skills; implementation knowledge of internationalization and localization; HTML, CSS, JavaScript, and RESTful interface design and utilization skills; OAuth or other token-based authentication and authorization
-Willing and able to learn new technologies and concepts
-Demonstrate resourcefulness in resolving technical issues
-Advanced interpersonal skills; able to work independently and as part of a team

Job ID: 14000IXI

Spamfighting: Bulk sending

In parts 1 and 2 of this series, I’ve explored the current best practices for authenticating outbound email, validating inbound email, and my own system configurations for such.

When not busy with my day job, I also serve on the board of our neighborhood youth basketball league as registrar and co-webmaster. As part of these roles, I maintain the email infrastructure, and send most of the announcement emails.

I had made a point of “warming up” the IP address for this server. I’ve been using this same IP for several months, and it has a returnpath.com Sender Score of 99, so I thought I’d be in the clear.

We have about 1500 parent email addresses on our announcement mailing list, with over 160 of them going to a single domain – austin.rr.com. No surprise there – Roadrunner is a very popular ISP. The problem is this: Roadrunner’s mail servers don’t appreciate it when my mail server sends an email, via this mailing list, to their inbound MX mail server, even after mailman splits the message up so there are only 10 recipients per message. The first few messages get through; the rest get put on hold (SMTP 4.x.x try again later), allowing only a trickle of messages per hour from my one IP address.
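
For reference, in Mailman 2 that recipient chunking is controlled in mm_cfg.py; a minimal sketch, assuming the stock Mailman-plus-sendmail delivery path (the path below is the usual RHEL/CentOS location):

# /etc/mailman/mm_cfg.py
# Limit each outbound SMTP transaction to 10 recipients
SMTP_MAX_RCPTS = 10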

During the Austin Snowpocalypse last week, we needed to get an announcement out to our parents that, because schools were closed and we rent all our court space from the area schools, our practices and games had to be cancelled for the night. I sent that note around noon. It took until 6pm before all the @austin.rr.com emails were allowed through – just in time to give notice of a cancelled 6pm event. Note – my message came from a valid SPF mail server, had a valid DKIM signature attached, and the DMARC policy is “none”, so it wasn’t blocked at that level. It was blocked because my mail server’s IP address isn’t allowed to send more than a few messages to Roadrunner subscribers each hour.

Roadrunner does provide a way for you to request relaxed rate limits for your IP. I followed their process, which uses Return Path’s Feedback Loop Management service, but my request was denied, no explanation given. Perhaps they know it’s a cloud service IP, which in theory could be given to another customer at any moment. I’ll file a request with my ISP to see if they’ll sign up with Return Path to be responsible for their netblocks on behalf of their customers. Not sure how well that’ll go over – it could make a lot of work for the ISP mail technicians.

One other alternative is to use an outbound mail service such as Amazon Simple Email Service, Mailchimp, or SendGrid in order to get my mails out to our players’ families in a timely manner. Mailchimp appears to have the disadvantage of needing to migrate all my mailing lists to them, instead of keeping my existing GNU Mailman setup. SendGrid has better pure SMTP integration, and with some sendmail smarttable hacking, I could probably make that work. All three involve some increased cost to us in the months when I send a lot of announcements.
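
One way to sketch that relay approach is sendmail’s mailertable feature, which routes by recipient domain rather than by sender (so it isn’t quite the smarttable trick, but it covers the austin.rr.com case). A sketch only – the relay hostname is SendGrid’s advertised SMTP endpoint, and you would still need SMTP AUTH credentials configured, e.g. via the authinfo feature:

dnl sendmail.mc: enable recipient-domain routing via /etc/mail/mailertable
FEATURE(`mailertable')dnl

# /etc/mail/mailertable: relay mail destined for austin.rr.com through SendGrid
austin.rr.com    relay:[smtp.sendgrid.net]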

Do you send bulk mail from a cloud service? How do you ensure your mails get through? Leave your comments below.

Spamfighting: mail server configuration

In part 1 of this series, I relayed a bit of my story about my use of SPF, DKIM, and DMARC to try to reduce the spam being sent as if from my personal domain, while increasing the odds that legitimate mail from my domain gets through.

In this part, I describe how these are actually implemented in my case.

First, let me describe my email setup. I have one cloud-hosted server, smtp.domsch.com, through which all authentic *@domsch.com email is sent. Senders may be either local to this server (such as postmaster@ which sends the DMARC reports to other mail servers), or may be family members who use a hosted email service (as it happens, all use GMail) as their Mail User Agent. Users make an authenticated connection to smtp.domsch.com, which then DKIM-signs the messages and sends them on toward their destination MX server. These users may also be subscribed to various mailing lists which would break (fail to get their legitimate message through to the expected receivers) if SPF policy were anything except softfail.

As such, my SPF record looks like:
@ 7200 IN TXT "v=spf1 include:_spf.google.com a mx ~all"

Outbound, user-authenticated mail from *@domsch.com should be treated differently than inbound mail. Outbound mail requires only a DKIM milter to sign each message. Messages are signed with a DKIM key, published in my DNS:
default._domainkey.domsch.com. 7200 IN TXT "v=DKIM1\; k=rsa\; s=email\; p=(some nice long hex string)"
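
A key pair like that can be generated with opendkim-genkey; a sketch, where the output directory is just the conventional location:

# Generate the 'default' selector key pair for domsch.com;
# default.txt will contain the TXT record to publish in DNS
opendkim-genkey -D /etc/opendkim/keys/ -d domsch.com -s default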

I publish a DMARC DNS record so I can get reports back from DMARC-compliant servers.
_dmarc.domsch.com. 7200 IN TXT "v=DMARC1\; p=none\;
rua=mailto:dmarc-aggregate@domsch.com\; ruf=mailto:dmarc-forensics@domsch.com\;
adkim=r\; aspf=r\; rf=afrf "
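
A quick sanity check that the policy record is visible to the world (any DNS lookup tool works; dig shown here):

# Verify the published DMARC record
dig +short TXT _dmarc.domsch.com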

Inbound mail to *@domsch.com should pass each message through an SPF milter which adds a Received-SPF header, a DKIM milter to check the validity of a DKIM-signed message which adds an Authentication-Results header, and the DMARC milter which decides what to do based on the results of these other two headers, and sends results to DMARC senders.

smtp.domsch.com runs CentOS 6.x, sendmail, and a variety of milters. On outbound mail, it runs opendkim. On inbound mail, it runs smf-spf, opendkim, and opendmarc, before sending it on to its final destination. My sendmail.mc file is configured as such to allow the different milters to run depending on direction – outbound or inbound:

FEATURE(`no_default_msa', `dnl')dnl
DAEMON_OPTIONS(`Port=submission, M=Ea, Name=MSA, Family=inet6, InputMailFilters=opendkim')dnl
DAEMON_OPTIONS(`Port=smtp,Name=MTA, Family=inet6')dnl
define(`confMILTER_MACROS_HELO', confMILTER_MACROS_HELO`, {verify}')dnl
INPUT_MAIL_FILTER(`smf-spf', `S=inet:8890@127.0.0.1, T=S:30s;R:1m')dnl
INPUT_MAIL_FILTER(`opendkim', `S=inet:8891@127.0.0.1')dnl
INPUT_MAIL_FILTER(`opendmarc', `S=inet:8893@127.0.0.1')dnl

Why do the milters listen on a local TCP socket, instead of a UNIX domain socket? Simply, they don’t yet have SELinux policies in place that let them use a domain socket. Once these packages are properly reviewed and included in Fedora/EPEL, we will adjust the listening port to be a domain socket.
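
For completeness, here is roughly what the milter side of that socket arrangement looks like – a minimal opendkim.conf sketch, where the key path, selector, and signing mode are assumptions based on the setup described above:

# /etc/opendkim.conf (sketch)
# Sign outbound mail and verify inbound mail
Mode       sv
Domain     domsch.com
Selector   default
KeyFile    /etc/opendkim/keys/default.private
# Listen on the TCP socket referenced by INPUT_MAIL_FILTER above
Socket     inet:8891@127.0.0.1
Syslog     yes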

Of these milters, opendkim and opendmarc seem to be properly maintained still. smf-spf, for its whole ~1000 lines of code, has been largely untouched since 2005, and its maintainer seems to have completely fallen off the Internet. All my attempts to find a valid address for him have failed. There are a variety of other SPF filters, the most popular of which is python-postfix-policyd-spf – which, as the name implies, is postfix-specific, and as I noted, I’m not running postfix. Call me lazy, but sendmail works well enough for me at present.

These milters are currently under review (smf-spf, libspf2, opendmarc) in Fedora and will eventually land in the EPEL repositories as well. opendkim is already in EPEL.

If you are using SPF, DKIM, and DMARC, what does your configuration look like? Please leave a comment below.