Mistakes Were Made

As we come up on the end of this semester, now is a good time to start addressing the errors we made while taking our survey and how we might have fixed them. Without further ado, my thoughts on the matter.

The first thing that comes to mind is simply that I do not think we had enough time to prepare a good survey to send for IRB approval. This is something I’ve brought up before, and I bring it up again because it affected our work heavily throughout the entire semester.

There are many research concepts that we did not sufficiently understand until later (particularly those that became clearer as we looked at more secondary studies and how they were conducted), and an experienced researcher will probably be able to pick these shortcomings out of the final study that we write up. The greatest among these is our research questions, which, in hindsight, are not very strong.

This problem is particularly hard to address because of the timeframe we are working with for this class. We only have the semester to plan and conduct our survey, so that doesn’t leave a lot of time for book learning and secondary research before we need to start cranking things out on the primary research end. My suggestion (which I will include in my course evaluation) would be as follows.

If our biggest obstacle is the time frame we are working with, I would simply suggest extending the time frame for our learning. I’m not sure it would be reasonable to have confirmed clients months in advance for these classes, but that’s not quite what I think should happen. I think it would help all of us if we had a brief period in a previous course, such as Intro to PR or PR Writing, focused on learning what goes into the kinds of studies we would be trying to write, and on reading examples of them. If we could learn, prior to this class, what good research questions are and how to craft them ourselves, that would greatly improve the quality of the surveys we submit to the IRB.

In the grand scheme of things, that we made mistakes is not terrible. Honestly this will give us and future researchers an opportunity to learn from what we have done, not only what we did correctly, but also what we may have done poorly. The story continues, regardless of our presence in it.

Crisis Management – A Constant Process

This blog was prepared for Inspired Strategies Agency. It can be viewed at the ISA website here.

The internet really is an amazing thing. The speed with which it allows information to travel across countries and borders is astonishing, not to mention that it has made some of today’s favorite PR tools possible (essentially every social media outlet ever). That speed of information is a double-edged sword, however, as not every bit of information about your organization will be something you want to scream from the hills. Sometimes that information can trigger a corporate crisis, and when that happens you’d best be prepared to manage it.

I think it is telling that we PR professionals prefer the term crisis management to “crisis aversion” or “crisis resolution.” The funny thing about most crises you encounter is that they don’t just come and go. You will probably be able to see most of the crises your company experiences coming from a mile away, and even when you’ve gotten through the worst of things, there will still be people willing to hold on to that crisis. Examples abound, but a prominent one today would be Donald Trump and the vulgar remarks he made regarding places he could grab women. This was said in 2005, and the recording only resurfaced in 2016, but it is something that Trump’s PR team is doubtlessly fighting against every day.

Now, realizing the potential longevity of any major crisis, I think it is important to also know when one is inevitable. When your company messes up, no matter how discreetly, the internet will eventually find out – it’s just what happens these days. We must consider what the best course of action will be to address the issue. Sometimes it may be more prudent to resolve the issue quietly, making sure that any injured parties are suitably recompensed and assuaged. Other times you won’t have that luxury. Perhaps the problem is far too large to solve quietly. In that case it is probably smarter to get the first word in to the public before the press does.

Transparency is one of the most important qualities you must exercise in crisis management, and if everyone is learning about your crisis weeks after the fact through a breaking news story, you aren’t going to look very transparent. Consider Equifax’s major security breach earlier this year, in which the credit information of more than 140 million people was stolen. They didn’t say a word to the public for more than a month, and naturally people perceived this as shady. Equifax looked like they had something to hide, as though they were only announcing the breach this late because they knew they’d have to eventually. What really happened was probably far more complicated than that, with weeks of investigation trying to determine the severity of the data breach, but that doesn’t matter one bit. The public will not be compassionate and understanding in the midst of a crisis. They want answers, and any excuses you can offer will be just that.

What I’ve said above isn’t quite all-encompassing in the art of crisis management. In truth, there are a variety of different kinds of crises that your organization will encounter – not just ones brought about by your company’s own mistakes. Regardless of the nature of your crisis however, you must remember that your crisis management needs to be three things. It must be accurate, it must be timely, and it must protect your organization in whatever way you ethically can.

The Takeaways from Survey Taking

As we move from Sunday the 19th to Monday the 20th, the window for taking our surveys in Public Relations Research has ended, and what a survey period it’s been.

Over that time we’ve faced difficulties with wording questions, with getting approval from our university’s institutional review board, and most prominently with meeting our final target of 500 surveys taken. So did we make the cut? Sadly, not quite.

In the end, our final count as of Sunday night is around 300 surveys taken, adding together the ones given at various community arts-minded events and those taken through our SurveyMonkey link. The majority were taken online, even accounting for surveys given using the SurveyMonkey link at events. While we may not have gotten the 500 we aimed for, I do think 300 isn’t anything to scoff at, and it should still be sufficient for our data (although our study will have to account for a larger margin of error due to the smaller sample).
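To put a rough number on that margin-of-error trade-off, here is a quick sketch using the standard formula for a proportion at 95% confidence. (The 500 target and the roughly 300 surveys collected come from above; the worst-case assumption p = 0.5 is just the conventional default, not a figure from our study.)

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for an estimated proportion.

    n: number of completed surveys
    p: assumed proportion (0.5 gives the worst case, the safe default)
    z: z-score for the confidence level (1.96 for 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# The 500-survey target vs. the roughly 300 we actually collected:
print(f"n=500: about ±{margin_of_error(500):.1%}")  # about ±4.4%
print(f"n=300: about ±{margin_of_error(300):.1%}")  # about ±5.7%
```

So the shortfall costs us a bit over a percentage point of precision, which supports the feeling that 300 is still workable. Keep in mind that this simple formula assumes a random sample; it does not correct for the convenience sampling done at events.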

There were things that I learned through this process, however, and those takeaways should be invaluable should I ever need to perform this type of research again. While we had some minor success giving surveys at various CAC events, there were some hiccups that prevented us from giving as many as we would have liked. For one, most of the events we went to were the kind where our target audience would be bringing children they needed to look after. Suffice it to say, people do not want to take surveys when they are trying to keep an eye on their six-year-olds. In the future I would like to be more selective about the events I choose for giving surveys.

Another thing to note is that our greatest bump in surveys came online when our client shared the survey for us. Over the following day or two we received the majority of our replies. I think it is worth noting just how effective the power of endorsement is, especially that of an opinion leader in your demographic.

All in all, delivering this survey has been a very educational experience, and I most definitely feel better prepared for when I may need to do such a survey again. In the meantime, it’s about time I started looking into data interpretation.

Research Pitfalls to Watch Out For

For my blog this week, I thought I might take some time to look at common errors in the research process. These are the kinds of things that I feel our class would have done well to know when we all started. Without further ado, here are some fun ones I found.

  1. Doing Work You Didn’t Have to Do

There were many errors I found that really boiled down to this broadly encompassing mistake – doing work that other people have already done for you. Before you begin a research project you should make sure it hasn’t been done before. You should be doing secondary research before initiating your primary research, and that entails making sure others haven’t already asked your research question. Imagine if you conducted a comprehensive and expensive survey project, learning many useful things as you went, only to discover at the end that another organization in the community had performed a similar project, answered the same questions your study asked, and published their results for the rest of the community to see. You’d feel rather silly, wouldn’t you?

  2. Failing to Account for Bias

When performing any sort of survey-based research, you need to remember that all of your participants will have biases. One could view research as determining what a specific bias of a population is, but people are more than just one bias. We are a confusing collection of opinions, and research needs to account for all of them. As our PR Research professor likes to remind us, if we give our surveys at Starbucks, we are surveying a very specific population. There needs to be enough variance and randomization in your sample that you can reasonably claim the group you surveyed was diverse enough not to be simply the people who thought one particular way in one particular place.
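The Starbucks point is easy to demonstrate with a tiny simulation. All the numbers below are hypothetical (the 30/70 population split and the 80%/40% support rates are invented purely for illustration); the point is only how far a convenience sample can drift from a simple random sample of the same size.

```python
import random

random.seed(42)

# Hypothetical town of 10,000 residents, 30% of whom frequent coffee
# shops. Suppose coffee-shop goers support an arts program at 80%,
# while everyone else supports it at 40% (made-up numbers).
population = (
    [{"coffee": True,  "supports": random.random() < 0.8} for _ in range(3000)] +
    [{"coffee": False, "supports": random.random() < 0.4} for _ in range(7000)]
)

true_rate = sum(p["supports"] for p in population) / len(population)

# Convenience sample: only survey people at the coffee shop.
coffee_goers = [p for p in population if p["coffee"]]
convenience = random.sample(coffee_goers, 300)
convenience_rate = sum(p["supports"] for p in convenience) / 300

# Simple random sample: every resident equally likely to be picked.
srs = random.sample(population, 300)
srs_rate = sum(p["supports"] for p in srs) / 300

print(f"true support:       {true_rate:.2f}")         # ~0.52
print(f"coffee-shop sample: {convenience_rate:.2f}")  # ~0.80 (biased)
print(f"random sample:      {srs_rate:.2f}")          # ~0.52
```

Even at the same sample size, the coffee-shop-only sample overshoots the true support rate by roughly thirty points, while the random sample stays close to it.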

  3. Acknowledge Your Weaknesses!

We ought to remember, especially if our research is going to be published, that every study has flaws that can influence the way meaning might be derived from the data. If anyone (whether it’s us or somebody else) intends to utilize that data, they need to know its potential flaws. Listing your weaknesses also provides opportunities for future studies to improve upon your research. The academic community is precisely that – a community.

These are all valuable things to know when planning out your own research. What are some other common flaws that you have seen in different kinds of research?

The Problem with Going Automatic

This blog was written for Inspired Strategies Agency. When it is posted, you can view the blog at ISA by clicking here.

Before going on, I would like to point out that, while not the subject of this post, reference will be made to the tragic church shooting in Texas which took place this last Sunday.

The NRA seems to have shot itself in the foot by accident, as far as social media is concerned.

This last Sunday, the NRA made a rather innocuous post to its Twitter, a simple tweet about gun vocabulary and terminology (pictured below). The article it links to isn’t necessarily offensive. It’s a bit right-leaning (referencing the misconceptions of “the liberal media”), but there isn’t really anything wrong with it – if anything, it’s an accurate bit of information that people ought to know if they want to talk about firearms in any capacity.

[Screenshot of the NRA’s tweet, captured November 6, 2017]

The problem was not the content they posted so much as when they posted it though… just as news was starting to go around about the church shooting which occurred in Texas this past Sunday, November 5.

It was only a few minutes after the news started to become widespread that the NRA’s tweet went out. People were paying attention to them, wanting to see how they would react to the most recent occurrence of gun-related violence. When the NRA tweeted “Correct and protect the record. Please help folks get these terms right,” people were angry to say the least.

While things may seem cut and dried on the surface, anybody who has had experience planning corporate social media can tell you what happened, and they’re probably cringing the whole way through this story. In our profession we use a variety of online tools to plan social media content ahead of time, and we let those tools post it automatically at pre-determined dates and times. It just so happened that the NRA had scheduled this post for this particular time, and by complete happenstance it was the worst possible time to share it. This was not a deliberate statement, but rather a case of being in the wrong place at the wrong time.

Unfortunately for the NRA, the way corporate social media works is not exactly widespread knowledge. What was originally just them sharing a cheeky blog about gun terminology has been interpreted widely as a misplacement of priorities, appearing more concerned over the sanctity of vocabulary than the loss of life. This isn’t what happened, but that’s sure what it looked like, and most people won’t give the benefit of the doubt after that.

What happened here is not really the NRA’s fault, but it is something that we should learn from. We never post in a vacuum, and that should go doubly for content that we are having posted automatically. We should make it common practice (especially when embroiled in heavy political issues) to make sure someone is “checking the waters” before we let our content out into the open. The best crisis plan is the kind that keeps you out of a crisis altogether.

How would you prevent something like this happening to your organization, and if it were to happen anyway, how would you react to it?

“People do Business with People”

This last Saturday I had the privilege of hearing Brian Basilico speak on social media at a gathering for PRSA Urban Chicago.  Brian Basilico is the area’s resident social media guru, and as usual he came with a veritable wealth of information to share.

While Basilico’s content seems more focused on businesses utilizing social media with the goal of making the sale, every phase up to that point is concretely rooted in public relations ideals. To this end, I’m a rather big fan of Basilico’s three truths of social media.

#1 – Businesses do not do business with other businesses. People do business with people. Relationships are the currency of business.

#2 – Social media is a medium, but social networking is a relationship. If you post great content for the right people, they will pay attention to you.

#3 – People do business with people that they know, LIKE, and trust. Trust is where the transaction happens, and it must be earned.

I subscribe heavily to these three truths, not just because I feel they are accurate, but also because I feel they show the utility of public relations in any business. One might think that a company pursuing nothing but the dollar wouldn’t need to push relationship building as much, but nothing could be further from the truth. In the age of social media, relationship marketing is just as important as product marketing, if not more so.

These three truths recognize that at the core of every business transaction is people, and work their way out from there. The first truth shows us that as rigid as a company’s corporate structure may be, the decisions and interactions are still being carried out by real people. The second tells us that posting good content for the right people will earn you followers, and net you their attention. The third truth then shows us what to do with that attention. It’s all about earning the consumer’s trust, and once that is accomplished you can go for a successful sale. These truths don’t show us how to do social media marketing, but they do show us why we do them.

Sometimes we need to be reminded of what exactly it is we are building towards. In the end public relations is the art of selling a relationship. Sometimes we just need to look beyond the numbers and statistics to remember that.

Risks in Research – Everyone Takes Them

Whenever you are performing any kind of research, there is always a risk that you may not find what you are looking for. Ideally, we try to minimize this risk by covering our bases. By making sure that the research we are basing any study on is sound, we increase the likelihood of us finding information that is both accurate and usable. Sometimes we will go as far as lifting entire portions of other studies if a tool is particularly effective or if our research hinges on the comparison of our data with other data sets (in a previous blog I talked about how another student had kept questions in a survey which he did not like because the survey had been done previously with those questions, and getting rid of them might skew his data comparison). Sometimes it becomes necessary for us to iterate on the research tools of others however. This is the way a group of researchers felt as they were attempting to learn how participation in the arts affects people’s perceived quality of life.

The 2008 study hoped to fill out the literature on a subject on which there was very little, but it did not exist in a vacuum. Other studies had been conducted which these researchers had read and cited, but they had observed flaws within that work. The greatest of these was that “the arts” had been lumped together as one subject in previous studies, which some believed may have produced faulty data. For the current study the researchers decided to take a different approach, splitting the survey into one portion that aimed to discover the participants’ quality of life and another that measured their varying degrees of participation in 65 different areas of the arts. Perhaps this was not the best way to do things, however.

In the new study, the conclusions drawn from the data still pointed to little correspondence between participation in the arts and satisfaction with quality of life. While this is what the data seemed to say, the researchers still felt the data was flawed, and they acknowledged that perhaps their research was flawed as well. This is not necessarily a bad thing, however.

As I stated previously, the process of research always holds the risk of failure. Even failed research can be useful however. A quote regularly attributed to Thomas Edison states “I have not failed 1,000 times.  I have successfully discovered 1,000 ways to NOT make a light bulb.” Even if by process of elimination, every failed attempt at collecting information informs you and other researchers of potential risks in the process, giving you the ability to plan for them in the future. Remember this as you are performing your own research, and be certain to admit your flaws as they become apparent to you. In this way, even a flawed project can contribute greatly to academia.

The Difficulties of Survey-Conducting

This past weekend I learned the hard way that research is not all sunshine and roses. Not that everything up to this point has been easy, but Saturday night was when I personally came into contact with one of the most difficult elements of research: the human element.

I like to believe that people are inherently good and empathetic, but when you attach yourself to an organization you suddenly lose all the benefits of that empathy. You are now a corporate extension, and no matter how charming and charismatic you may be, you can now be ignored, especially if you are asking something of people.

Perhaps I’m making this out to be a bit worse than it really is.

Let’s go back to the beginning. This weekend I began distributing and collecting surveys at the Exploration Station’s Sleepy Hollow Village. For those unfamiliar with the event, think of it as a small fair centered around a reenactment of Sleepy Hollow. Saturday night I was stationed next to the gift shop area, told that I could give surveys in that vicinity, and let loose. Something I realized very quickly, though, is that when your primary attendance is parents with their children, most of your potential survey takers are going to have much more important things to be concerned about – namely their young ones, who are all blasting around at the speed of sound.

This was only one of the many things that made it difficult for me to hand out surveys on Saturday night, but I think it is important that we talk about those difficulties and discuss ways to cope with them, or even to overcome them. All of us in this class are probably first-timers when it comes to this, so pooling our information would be invaluable to each of us. That being said, here are some of my takeaways.

First: While you may be conducting the survey for a client, always remember that you are there by the client’s grace, and it is important that you respect their wishes. This means using the space you are given to its fullest extent, and only moving if you discuss it with your client. You may think you’ll get better results if you move somewhere else, but your client put you where they put you for a reason. Whether that reason was that they wanted you there or that they didn’t want you elsewhere does not matter – both are equally important.

Second: Put your Disney face on. Nothing turns people off more than someone who doesn’t look like they want to be there. Moreover, if you aren’t making an effort to keep that face on all the time, people will be arguably just as put off by someone who looks like they’re just putting on a fake face every time someone walks by.

Third: This one’s a bit subjective for me, but have an opening line prepared. Don’t just walk up to people and say, “Hey, wanna take a survey?” Appeal to something within them. My strategy was to ask people if they would support the arts by taking a community survey. Now the subject is supporting the arts, and getting survey results is just an afterthought.

What are your thoughts on the subject? Any useful tips for giving surveys, or things you think should be avoided?

Data Analysis – Quantity or Quality?

In the world of metrics and data analysis there has been a long-asked question that many struggle to answer properly. That question: quantity, or quality? Should we be trying to get more detail out of each response, or should we format our research to reach as many people as possible and collect as many responses as we can? According to Google, the answer may lean closer to the quality side of the scale.

Janneke van Geuns, the head of insights and analytics at Google, recently spoke with PR News Online (the interview can be found here) about data analysis, and specifically about what kind of data companies should be looking for if they want the most effective information.

According to Ms. van Geuns, probably the biggest mistake companies make is trying to collect all the data they possibly can, thinking that it will somehow give them an answer if they just look at it hard enough. The fact of the matter is that useful data is relevant data, and there can’t possibly be any relevance if you don’t approach data collection with a research question in mind. Van Geuns compares gleaning insight from data collected this way to trying to find a needle in a haystack.

In this spirit, van Geuns told PR News that the most important metric you can look for is the one that corresponds to the results your company cares about. It is important, then, that if we are going to create a survey based on those results, we do our homework ahead of time and answer the big preliminary question: which metrics correspond to the results we want to influence?

This is important information to any public relations professional. It is tempting at times to just want to throw a bunch of questions together and toss them out into the open. In practice however, we need to make sure that we are asking twenty questions to ourselves about every single question that we want to ask our public before we build a survey. If we don’t approach data collection with an idea of what we are looking for, we may as well be throwing darts blindfolded. Good results are built with good data, and good data is collected through good research.

Public Opinion in the Wake of Tragedy

I would like to preface this blog by saying that I in no way wish to make light of the events which occurred this last week in Las Vegas. The shooting was not only horrific but is still very fresh in our minds. As such, I will not cover the details of the event itself or elaborate on them in this post. Rather, I will look at people’s reactions to what has happened. Public opinion on this issue has not only gone through a process of growth this past week, but is also very polarized. After all, sitting at the core of all of this is a political issue that both sides of the aisle have historically fought long and hard over: gun control.

While debates and thoughts about gun control have been flying around haphazardly for the past few days, things did not begin this way. While some were already weighing in with their thoughts on the shooting, many others were calling for silence, if not out of respect then at least out of patience. During the first two to three days after the shooting, you probably recall seeing many posts on social media saying that people should wait before bringing up the issues – some saying that we should grieve first, others saying that we should wait to hear more information. In contrast, you might recall seeing others who believed that grief should not delay our willingness to talk about the issues, but should instead raise the urgency of speaking about them. From what I could see, people on both sides of the political spectrum were rallying to either side of this argument, creating two separate opportunities to take sides on one particular issue, and for a brief period public opinion itself was split four ways. As time went on, however, people who had been more reticent began to join one side or the other of the two larger positions, leading to the veritable mess of opinions on gun control that we see right now.

Currently, public opinion on the gun control issue seems more divided than ever, with people on both sides appearing to pull farther into their corners in response to the tragedy in Las Vegas. Most interestingly, there have been some instances of people from both ends of the political spectrum attempting to attribute reasons for why the shooting occurred. While some believe such attribution to be politically motivated, I think it is more reasonable to infer that what we are experiencing is a country of people trying to cope with a horrible crime that has so far had no motive attached to it. By most accounts from the investigations, the shooter was not the kind of person who would do what he did, and I think that is what is currently driving people so effectively to either side of the issue.

We must remember that the issue of gun control has always been a touchy one, not just because of the weight and responsibility attached to the right to own arms, but also because it is an issue that the American people have been divided over almost 50-50 for a great deal of time (with those halves not being necessarily exclusive to any political party). However, while it may seem that the public opinion on gun control only goes two ways, I’m not entirely convinced.

I find myself wondering if this is a true representation of the thoughts of the American people, or if there is some middle ground occupied between those two rigid stances. I have observed that the farther left or right you lean on an issue, typically the more passionate you are about it. If you agree with that generalization, then it makes sense that the loudest voices would come from those two camps.

While I was unable to find any widely unified opinions lying in this “in-between” zone on the issue, I am still not convinced that they don’t exist. I believe that those opinions are important, especially on an issue as divided as this one. Perhaps it is a bit idealistic of me to think this, but I feel that a center leaning front on this debate could serve as a missing link that could help us make more sense of what the people at large are thinking.

That brings me to my question for today. Why might we look for these “silent demographics?” What kinds of thoughts or information might we be looking for that could justify the efforts to bring those kinds of people into the fold of an issue?