I admit I have been known to be quite verbally colourful when I am passionate about a topic. Well, OK, even in everyday conversation.
I was using corporate instant messaging to chat with a colleague and friend, and I slipped into my colourful language. I instantly got an inline message saying the chat wasn’t sent, and when I repeated it just to be sure of what I had seen, I got a pop-up from a bot citing policy. “OK Mom, I won’t do that again, at least in corporate chat.” Yes, I was completely in the wrong in not complying with policy and was issued the appropriate level of reprimand, and no doubt there is a suitably weighted escalation process in place should I become a repeat offender.
I had to laugh, because it reminded me of the evolution of content filtering since the early days of proxy firewalls and some of my early experience implementing (not offending) in the space. In the mid-90s a colleague of mine owned an EMAIL content filtering project. This was when the first generation of SMTP content filtering products hit the market. One of his tasks was to produce a list of culturally diverse derogatory words for the external EMAIL filter to block and/or notify on. I recall him interviewing colleagues of various backgrounds to ensure his list was comprehensive. There were two words I suggested he not include because I was sure that no EMAIL would flow; the system would come to a grinding halt if it filtered them: fuck and shit.
Sure enough I was right, though of course the comprehensiveness of the list was to blame as well. I do have to say that once we removed those two words EMAIL did flow, but still quite slowly. It was decided we should filter only what was being sent externally and not what was being received. This was almost fifteen years ago.
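To illustrate why those early filters crawled, here is a minimal sketch (the word list, function names, and approach are my own hypotheticals, not the actual product's design): scanning every message once per blocked word scales with list size times message size, whereas compiling the whole list into a single pattern makes one pass regardless of how long the list grows.

```python
import re

# Hypothetical stand-in for the "culturally diverse" word list described
# above; the real list was far longer, which is exactly the problem.
BLOCKED_WORDS = ["fuck", "shit", "damn"]

def naive_filter(message: str, words: list[str]) -> bool:
    """One scan of the message per blocked word -- roughly how a
    first-generation filter might behave. Cost grows with both the
    size of the list and the size of the message."""
    lowered = message.lower()
    return any(w in lowered for w in words)

def make_compiled_filter(words: list[str]):
    """Compile the entire list into a single alternation so each
    message is scanned once, no matter how long the list gets."""
    pattern = re.compile("|".join(re.escape(w) for w in words), re.IGNORECASE)
    return lambda message: pattern.search(message) is not None

check = make_compiled_filter(BLOCKED_WORDS)
print(naive_filter("This report is fine.", BLOCKED_WORDS))  # False
print(check("Well, shit."))                                 # True
```

Either version still suffers the substring problem (a match inside an innocent word), which is part of why the comprehensive list, not just the two high-frequency words, dragged mail flow down.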
Shortly after, still in the 90s, I was the project leader for a web URL filtering solution that included a subscription service of web site categorization, supposedly vetted by legal experts. Of course that lent itself to filtering for many policies. I told the policy custodians my focus was security; I was not the arbiter of what is porn and what is art, nor of what is acceptable language. They were. I also wanted to know, for any policy violation caught by the system, what internal web link the violation page should point to, along with the contact info for any complaints, because it wasn’t going to be me.
Today’s content filtering technology is vastly improved, culminating in comprehensive DLP suites and constantly evolving heuristic-based solutions. But while the technology has evolved, have corporate cultures evolved to make intelligent use of it, applying it properly to the intellectual capital actually at risk, given finite resources and funding?
Here is a way to self-assess. Quantify the resources you currently spend on content filtering for policy violations that pose minimal risk to the business versus policy violations that pose a significant, measurable risk. Hopefully you have the luxury of a CRO who arbitrates where policies compete and/or align.
Perhaps I’m a bit cheeky in suggesting a company shouldn’t give as much of a shit about blocking the word fuck in an EMAIL/chat to a colleague/friend, but should give two shits about the dialogue I have with someone external in EMAIL/chat about intellectual property.
Of course, all opinions are my own and apparently no editors used : )
For what it’s worth,