
BCMA Climate Action Podcast Series EP05: Online Misinformation and Disinformation with Yimin Chen

In this fifth episode of our ongoing Climate Leadership & Action for Museums podcast series, we speak with Yimin Chen, a PhD candidate in Library and Information Science in the Faculty of Information and Media Studies at the University of Western Ontario.

His research examines the communicative practices of online communities and cultures, with a focus on Internet trolling behaviours and the controversy surrounding them. His previous projects range from fake news and deception detection, to library automation, to the impact of political memes on social media.

In a world choked with fake news and online misinformation, museums have the potential to leverage the public’s trust to help communities discern reality from fiction when exploring topics like climate change. In this conversation, we explore the history of online misinformation, why some people choose to act like trolls, and why some online accounts might not even be people at all.

A brief word of warning – this podcast contains explicit language in the context of internet culture and online movements. 

Interview Transcript

The BCMA team is experimenting with transcribing podcast interviews to make the content more accessible to a broader audience. Below is a lightly-edited transcript from Ryan Hunt’s conversation with Yimin Chen. Please let us know what you think about transcribing podcast interviews by emailing bcma@museum.bc.ca.

Ryan: [00:00:00] Thank you so much for joining me this morning. To get started, would you mind introducing yourself?

Yimin: Yeah, of course. So my name is Yimin Chen. I’m a PhD candidate in the faculty of information and media studies at the University of Western Ontario.

My research interests are in online culture and communication with a specific focus on internet trolling cultures and in fake news and disinformation.

Ryan: I think for a lot of people, myself included, the concept of fake news was unfamiliar until, let’s say, 2016-ish, when the US presidential election went the way it did.

Can you unpack that term? Because I think often, when it’s used in popular culture, it’s used almost as fake news itself in some contexts. It’s like an excuse. So can you unpack what is fake news?

Yimin: Yeah, definitely. It wasn’t just fake news. A lot of online [00:01:00] behaviours and phenomena, and these sorts of information-related practices, suddenly became much more interesting to the public somewhere around 2016 or so, for reasons. But I think you’re right about how fake news was introduced into the public consciousness around that time, and how its use has perhaps diverged from what you might expect.

I think you’re right. A lot of this does trace back to how then-presidential candidate and eventually President Donald Trump would use this term. Basically, for him, it seemed like fake news was anything or anyone that disagreed with whatever he was saying at that present time.

And for a lot of, basically, complicated cultural reasons, this has been the sense of the term that has [00:02:00] picked up steam: first, of course, among his supporters and his fan base, and later on, in more recent years, across online public discourse and in-person public discourse more broadly.

Whereas, in the past, we might think of fake news as deliberate hoaxes, like the old National Enquirer and supermarket tabloids reporting on alien visitations and how Elvis wasn’t actually dead. But that’s, I guess you could broadly say, the more harmless type of fake news, where everyone knows it’s just a complete sham. The people writing it know that, the people reading it know that, and it’s fun and jokey.

Fake news has become, in a way, I would say, weaponized, in that it’s used as a knee-jerk accusation to attack someone’s credibility or someone’s honesty in discussions or, more often, arguments where the person claiming fake news [00:03:00] often just doesn’t like what they’re hearing.

So instead of coming up with an actual fact-based counter-argument or contrary logical reasoning to refute some point, they just try to deny the legitimacy of the other person’s opinion or perspective by claiming it is completely false.

Ryan: So if I’m understanding you correctly, you’re saying that, depending on the context and the person, fake news can either be “real,” as in news that is fabricated, or it can be “fake,” as in real news that is being accused of being fake to de-legitimize it?

Yimin: Yeah, I think that’s pretty much the two sides of it. And honestly, from what I’ve seen, a lot of the journalistic world and a lot of the academic world talking about these sorts of things in the last couple of years have tried to move away from this term, because it’s [00:04:00] in a way become so poisoned and so effectively meaningless, unless you’re specifically studying the phenomenon of people claiming fake news. More precisely, we may talk about misinformation, disinformation, and basically bullshit, which is an academic term, actually.

Ryan: And I feel like, from an outside perspective, clearly not someone who’s devoted as much attention to the topic as you, it feels almost intentionally opaque in a lot of ways: why it’s created and what its goals even are. Do you feel that opaqueness is a core part of the concept of online misinformation, disinformation, and fake news?

Yimin: Yeah, I think in a deeper, broader way, a lot of this comes from online culture. I don’t personally see online social media and a lot of online social spaces [00:05:00] as particularly productive venues for constructive discourse, discussion, and general enlightenment.

Especially not the big public spaces like Twitter. Earlier on in the internet’s life, there were a lot of techno-utopian ideals about how online spaces would represent the new coffee salons of the Enlightenment, where both the intelligentsia and the masses would be able to hobnob, meet together, talk about ideas, and engage in public discussions about interesting things, about important things, and so on and so forth.

And maybe in some spaces that ideal still lives on and still exists. But more often than not, I think, partly because of the fact that, online, you can’t see who you’re talking to, the communication is often delayed and asynchronous.

I might tweet something, and then sometime later [00:06:00] someone else tweets back at me, and so on. There’s a level of disinhibition. There’s this concept called the online disinhibition effect, where, because of the lack of social cues and the kind of immediate feedback we would otherwise get if we were speaking face to face, people on the internet tend to act out in more extreme ways than they would in person and, in many ways, exhibit kind of sociopathic tendencies. I think it’s become worse: we rarely speak to other people in order to understand their opinions and their perspectives and where they’re coming from, and so on and so forth.

We speak to other people in order to win arguments and demonstrate that they are wrong and stupid and that we are correct and intelligent. The sort of alt-right and [00:07:00] much of the right-wing online very humorously describe this as just, quote-unquote, “owning the libs.”

And I think that’s a pretty apt description. They don’t engage in conversations online in order to explain any of their ideas, or to really try to show how they’re right and other people are wrong. They simply attack people. And in a way, it’s almost like a drive-by shooting sort of thing.

They will just throw out “fake news,” basically, or baseless accusations, inaccurate facts that are really just cherry-picked and skewed misrepresentations of data and history, and then just run off laughing that they have caused someone, perhaps, to feel uncomfortable for a while.

I think this is becoming more prevalent, but I don’t think it’s particularly new to online environments [00:08:00] that fake news is a thing.

Ryan: I myself have worked in communications in the government and cultural sectors for a number of years, and I think that, broadly speaking, a lot of people approach social media thinking that they are engaging in good faith, that we’re connecting as people. Based on what you were just saying, do you think that default mindset should be shifted, so that when you are engaging with anyone on social media, you should be operating from, and this may be cynical, more of a place of suspicion?

Yimin: No, absolutely. I would hope that, even for myself, I’m not so cynical that I would, as a default, automatically assume everyone I talk to online is trying to mess with me or to argue in bad faith. I do think it is important that we approach our [00:09:00] interactions online openly and honestly, at least at first.

If we go into every conversation online guns blazing, on the offence, then really, are we any better than the online trolls, for example, or the bullies, and so on? But I do think it is becoming increasingly important to at the very least be aware that some of these things are happening, that there is a significant contingent of people online who do not, as a rule, act in good faith or speak in good faith, and that much of what motivates them is simply to harass and annoy people, often for nothing other than their own entertainment. So I do think we should still continue to be kind whenever possible, but also to be very careful and aware of potential dangers in the online environment and in talking to people [00:10:00] on these open public platforms.

Ryan: When I talk about the concept of online misinformation or disinformation with people for whom the internet does not occupy as large a percentage of their brain as it does mine or yours, I think one of the first questions I often get is: why, to what end, are people doing this? What’s in it for them? Those kinds of questions, to try to make sense of the seeming chaos. What, in your experience, are some of the leading reasons why we see these patterns of behaviour?

Yimin: There was a game I played when I was a kid, and I’m wondering if you’re familiar with it. When you give people high fives, someone will say “up high, down low,” and then, when the other person goes to give you a low clap on the hand, you jerk your hand away and you say “too slow.” And you laugh at the other person, at their shock, [00:11:00] at their frustration or confusion.

I feel like, in many ways, the reasons why people would engage in fake news discourses, misinformation, disinformation, and so on and so forth online are pretty similar to that. In the minds of a lot of trolls, they’re basically playing a game, much like kids would do on any school playground.

And in this sense, their game is to confuse you, or make you believe something that is patently ridiculous, or cause you emotional distress. And if they’ve caused you to react in any of these ways, then they win the game. Sometimes they’ll move on to someone else, or sometimes they’ll keep at you, until something maybe even worse happens, until you disengage, or until they get bored, or what have you.[00:12:00]

But I think, for a lot of people, it just boils down partly to boredom, perhaps, but also to a sort of malicious joy at inflicting, often, some minor level of pain on other people. In doing so, they get a kind of rush; it’s asserting dominance, in a way, over someone else, essentially winning a game or sort of contest.

As a naive online user, you may not even know you were a part of it. So frankly, it just boils down to the fact that some people are jerks, and whether online or offline, jerks will be jerks.

Ryan: Your wording is interesting: some “people.” There is also a significant contingent of social media accounts that were created by people but are not people in their day-to-day operation, in that they are bots or algorithmically generated. I was doing some pre-research for this [00:13:00] conversation, and there are estimates that around nine to 15% of all Twitter accounts in existence are bots. I think, on a human level, I can understand the group you were just talking about that gets some kind of sick fun out of harassing people; bullying behaviours are very common. Why then create bots? Why do they exist? What purpose are they serving?

Yimin: If it’s fun to annoy one person, imagine how much fun it would be to annoy a hundred or a thousand people at the same time? Or imagine how fun it would be to automate the process, and every once in a while, if you so chose, to go back and skim the greatest hits, and so on.

So for some of these people, the motivation to create bots is simply to extend the reach of their trolling exploits. But for a lot of others, the motivation is, in a way, ideological and propaganda-driven. [00:14:00] So in addition to just being jerks online, a lot of people, perhaps as a sort of throwback to that whole public sphere idea, do use the online environment to engage in political discourse. It’s one of the most common things that people talk about. In the comments sections online, or on people’s Facebook feeds, I’m sure you have seen that these conversations can quickly devolve into basically just mutual abuse, harassment, name-calling, shouting “fake news,” essentially.

I believe a recent slogan was “Let’s Go Brandon,” which in a lot of the right-wing discourse in the United States is used to basically mean “fuck you, Joe Biden.” I don’t know if you have to bleep that, but the point there is, sometimes [00:15:00] people in that mindset may use bots simply to broadcast some of these messages, simple slogans like Donald Trump’s “stop the steal,” or even just writing a bot to post fake news to the comments of CNN or whatever other news organization you might happen to personally disagree with.

A lot of these simpler bots are fairly easy for platforms like Twitter to detect and remove. But we have increasingly seen bots that behave in more humanlike ways, and if you are even slightly more sophisticated with the types of political messaging, for example, or other sorts of comments that you program into the bot, then you could in effect have, for very low investment, hundreds, thousands, tens of thousands of essentially [00:16:00] political canvassers moving all across the internet, spreading whatever ideological message you want to put out there, or, as a sort of attack ad, criticisms against the people you don’t like.

This is not exactly a very new strategy. It’s basically astroturfing; that is, a sort of fake or phony grassroots-style movement. It’s designed to give the impression that a lot of normal, everyday people all, you know, independently have this idea, where in fact it’s just maybe a small group of people, and who knows how many automated algorithms or bot profiles, trying to give the illusion that there’s broad mainstream support or appeal for this one message that you’re sending.

So bots can be very useful for all sorts [00:17:00] of things. Not all of them are necessarily malicious, but most of the uses of internet bots that make the news, or that we might be most aware of, are likely to be not in good faith.
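[Editor’s note: to make the “simpler bots are fairly easy to detect” point concrete, here is a minimal illustrative sketch, in Python, of the kind of crude bot-likeness heuristic a researcher might start from. The signals (no followers, no profile photo, repetitive slogan posting) come from this conversation; the account handle, weights, and scoring are invented for illustration and are not any real platform’s detection method.]

```python
# Illustrative only: a crude bot-likeness score built from the surface
# signals discussed above. Real platform detection is far more sophisticated.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    followers: int
    has_photo: bool
    posts: list[str] = field(default_factory=list)

def bot_likeness(account: Account) -> float:
    """Return a rough 0.0-1.0 score; higher means more bot-like."""
    score = 0.0
    if account.followers == 0:
        score += 0.3                     # empty social graph
    if not account.has_photo:
        score += 0.2                     # default avatar
    if account.posts:
        # Repetition: how dominant is the single most common post?
        top_count = Counter(account.posts).most_common(1)[0][1]
        score += 0.5 * (top_count / len(account.posts))
    return min(score, 1.0)

# A hypothetical account of the kind discussed above:
suspect = Account(
    handle="freedom_patriot_84751",      # invented handle
    followers=0,
    has_photo=False,
    posts=["Stop the steal!"] * 9 + ["Let's Go Brandon"],
)
print(f"{suspect.handle}: bot-likeness {bot_likeness(suspect):.2f}")  # 0.95
```

As Yimin notes, surface checks like these only catch the simplest bots; anything even slightly more humanlike defeats them.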

Ryan: And then, for context, for listeners who might not be familiar with the term “astroturfing,” my understanding is that it’s like a mirror version of the idea of something being grassroots: that there is a casual network of support coming up from the ground, from a community, in support of something. Whereas astroturfing is its twisted mirror version, manufacturing the illusion of that ground-up support through either paying operatives or having bots manufacture outrage by making people angry and reactive. Is that a decent definition of astroturfing?

Yimin: Yeah, pretty much. We can think of ideas or messaging as coming from [00:18:00] either the top down or the bottom up. If it comes from the top down, it’s, say, the leaders of a political party who themselves come up with ideas or platforms or policy, and then basically tell everyone below them what to do. Grassroots, though, is supposed to describe a bottom-up process, where the community, the broader public, you know, mass individual supporters of, say, a political party, help to determine policy, to determine the direction that they want their leaders to go in, and so on.

And in most democratic societies, the idea is that this sort of bottom-up grassroots process is more democratic and therefore more legitimate. So there’s a lot of appeal in being able to appear to be a grassroots movement. As you said, astroturfing is basically an underhanded way of [00:19:00] appearing to be a grassroots movement when you are in fact essentially completely made up. Instead of having individual people voice their support for, say, an idea or a policy platform, you can replace those individual people with an automated bot that posts the same message, or a small collection of the same sorts of messages. In this way, you can make a kind of top-down, or sometimes even sideways, strategic approach appear to be bottom-up. So you might be only a small handful of people with perhaps very controversial, radical, maybe extremist views, but with what appears to be thousands and thousands of people saying the same thing, appearing to agree with you. Then you give yourself and your group a sort of veneer of legitimacy; you get the appearance of having broad support [00:20:00] when really it’s a handful of people and perhaps a million bots.

So that’s, I think, one of the great dangers of the ability to use bots in these sorts of astroturfing campaigns.
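[Editor’s note: here is a toy sketch, in Python, of the astroturfing mechanic Yimin describes: a tiny pool of canned slogans fanned out across thousands of automated accounts to look like broad, independent support. All handles, slogans, and numbers are invented for illustration.]

```python
# Toy astroturfing simulation: a tiny message pool, fanned out across
# thousands of automated accounts, looks like broad independent support.
import random

MESSAGE_POOL = [                     # invented slogans, echoing examples above
    "Stop the steal!",
    "Let's Go Brandon",
    "The mainstream media is lying to you.",
]

def astroturf(num_bots: int, posts_per_bot: int) -> list[tuple[str, str]]:
    """Generate (handle, post) pairs from a small pool of canned messages."""
    feed = []
    for i in range(num_bots):
        handle = f"concerned_citizen_{i:05d}"   # invented naming scheme
        for _ in range(posts_per_bot):
            feed.append((handle, random.choice(MESSAGE_POOL)))
    return feed

feed = astroturf(num_bots=10_000, posts_per_bot=3)
distinct = len({post for _, post in feed})
print(f"{len(feed)} posts from 10,000 'people', "
      f"but only {distinct} distinct messages")
```

The giveaway, and the reason the simplest versions of this are detectable, is that thousands of apparent voices collapse into a handful of distinct messages.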

Ryan: So we started this conversation talking about the 2016 election, and I think, in the context of a US presidential election, it makes sense why forces might try to sway things one way or another, because the US president has a tremendous amount of power and influence on the global stage. Where things start to break down for me a little bit is that you see these tactics and techniques then filtering down into smaller and smaller issues and topics and organizations. Now it is not unusual for, say, the BCMA, when we post things on social media, to get responses from accounts that exhibit [00:21:00] very bot-like behaviour. And as much as I think highly of our influence, the BCMA is not the president of the United States. When we post something on Facebook about climate change, or sustainability, or decolonization, we have one account, with no followers, no photos, no nothing, comment something along the lines of “I don’t think communism is the answer.” And it’s almost like clockwork. We can almost count on that response coming when we post on certain topics.

Yimin: Beyond what we said earlier about some people just being plain jerks, there was a fairly influential communication theorist from the 20th century by the name of James Carey. No relation to the actor, as far as I understand. But one of James Carey’s quite influential ideas was to describe two potential ways of looking at communication, and he [00:22:00] called these two things the transmission view and the ritual view. Basically, in short, the transmission view is perhaps more aligned with what we would normally think of as the purpose of communication in day-to-day life: to transmit information, to send useful information to another person because they should know about it.

So you can think of this as getting a bill from the electric company. They’re giving you information that this is how much you owe them, and they’re doing so in order to get a response from you: you paying the money. Or if I were to ask my friend, “Where would you like to have lunch today?” and they say, “Let’s get shawarma,” or something like that. The purpose of that interaction was to transmit my opinion, to solicit his opinion, and to reach some sort of conclusion [00:23:00] at the end. The purpose of these communication events is to send and receive information and then to act upon it.

The other way of looking at it, James Carey thought, was the ritual view. Under this view, and it’s not entirely mutually exclusive with the transmission view, communication is very often only part transmission. Many times when we communicate with each other, it’s not necessarily to give you a piece of information that I think you need to know. It is simply a way of maintaining society and culture, essentially, to let each other know who we are and where we fit in our society, to help the kind of day-to-day workings of a civilization. So under this view, you could think of things like small talk. If I say, “Oh, hey, how are [00:24:00] you? Interesting weather we’ve been having lately, right?” I more than likely don’t actually care particularly much how you are or how the weather is. This is simply communication designed to socialize. Or, more specifically, the way I think this applies to our conversation is that one of the purposes of ritual communication is to signal your identity. In many of these cases, we can see these as signals of our political affiliation, or of, say, our race, gender, or class affiliations, and essentially a way to tell people who we are, what we represent, and the things we believe in and subscribe to. So I think, especially online, where again, without those personal social cues, without physical cues, without being able [00:25:00] to see who it is we’re talking to, we are often limited to expressing our identity, who we are, through these sorts of identity signals.

And that is a very ritual way of communicating. For example, if I want to communicate that I support Donald Trump, I might have the letters MAGA somewhere in my account profile, or it might be something I throw into my messages, and so on. Basically, this whole thing about “Let’s Go Brandon” doesn’t communicate anything. It is almost entirely ritual. It’s simply saying, I oppose the Biden presidency and, generally, by extension, I support Donald Trump. So where I am going with this is that if you subscribe to some of these worldviews, especially the MAGA “Make America Great Again,” fuck Joe Biden, “let’s go Donald Trump” worldviews, then to be part of that [00:26:00] you also, very often, have to subscribe to things like: climate change isn’t real; talking about racism and colonialism is the real racism, it’s what causes division in our societies; and if we don’t talk about it, then it doesn’t exist, and we can go along merrily with our eyes and ears closed, pretending nothing ever was and is wrong. And if, again, someone subscribes to all of these things, it can be a motivator for them to then go and basically attack any instances they see of people or organizations or communities who express opposing viewpoints.

So this is why some random person might go onto the BC Museum Association’s Facebook page or Twitter account or so on [00:27:00] and just accuse you all of being communists or whatever: because it is a way for them, partly, to affirm to themselves their own identity, that who they are as a person is standing against the wrongs that they perceive you doing, and also as a sort of flag to other people who think similarly that this is something we should be against. And if you don’t act quickly enough, you might get one or two, like a trickle of these sorts of comments early on, but eventually, if they stay up, then they might get the attention of yet more people. And you might get yet more comments just attacking your positions, attacking your initiatives, and calling you anything from a socialist, to a commie, to a pinko, or whatever. And I guess, to come back to the original point, what do they [00:28:00] get out of it? I think a lot of them, again, are just jerks, first of all. But there’s also something more complicated.

We can consider human beings kind of attention-seeking animals. We are, of course, quite social. We live in complicated cultures and societies, and one of the ways we make sense of the world and of ourselves is to believe that we have an identity, that we stand for certain things, we represent certain things, we are certain things, and it’s this identity that governs how we act. But also, we often choose to express our identities because it’s a way of saying, “Hey, I’m here, and maybe I’d like to talk to other people, or I’d like to be with like-minded people, with my community.” And on the internet, sometimes that manifests as them attacking a small-stakes provincial museum [00:29:00] association’s social media page, because they happen to see you as representing something that is not only not part of their identity, it’s antithetical to their identity. They see you as perhaps a threat to the stability of their identity, and therefore you must be attacked, or wiped out, or at the very least annoyed a little bit.

Ryan: A quote that I often think of in the context of online misinformation, and that I’d love your response on: in an interview in 2018, Steve Bannon, former Trump campaign manager, and if I may editorialize, a total monster, was speaking about the challenges his kind of right-wing populist movement faces. He said the real opposition is the media, and the way to deal with them is “to flood the zone with shit.” Meaning [00:30:00] that the way to defend against legitimate critique is to just basically spam nonsense, to blur the line between real and fake, or legitimate outrage and manufactured outrage. How much of a factor do you see that mindset being in a lot of online misinformation?

Yimin: Yeah, if we go back full circle to fake news, this is basically the entire idea behind fake news: to delegitimize basically everything, even authoritative, reliable, and accurate information. Just to bring those dependable sources of information, and facts themselves, down to the level of bullshit, where, you know, anything goes.

So if everything is bullshit, then, you know, nothing is legitimate. And in many [00:31:00] ways, this sort of works, because no individual person is going to be a knowledgeable expert about everything. We might all have our specific fields of expertise, we might know a lot of trivia about certain things or whatever, but it’s impossible for us to know everything about everything.

So there are always going to be certain pieces of information, certain messages and comments and statements that you see where, in order to determine whether or not they are accurate, legitimate statements, you would have to do some fact-checking. You’d have to look it up. You would have to do some research to see if it is a verified, evidence-based statement or not.

And of course, no one has the time to fact-check everything they see every day. Basically, this is how this disinformation bullshit campaign works: by flooding [00:32:00] the airwaves, the internet tubes, whatever, the public discourse with absolute bollocks, they force a situation where either you have to fact-check almost everything that you see, or at some point you just give up. And that leaves people vulnerable to completely baseless claims, or conspiracy theories, or just bullshit ideas, because they see them, don’t have the expertise to automatically accept or reject them, and don’t have the time or the willingness to go and check whether they’re absolutely true or not.

This is the sort of grey zone that allows, basically, bullshit, conspiracy theories, and bad actors to thrive. And I’m reminded of another quote. I forget his name, but I think it was a computer scientist [Alberto Brandolini], and he posted to Twitter some years back about what he [00:33:00] called the “bullshit asymmetry principle.” Basically, this states that the amount of time and effort it takes to refute bullshit is an order of magnitude greater than the time it takes to create bullshit. So essentially, if we are living in an environment where bullshit is the norm, where bullshit thrives, it takes so much more effort to clean it up than it would have taken to prevent it in the first place. And of course, trying to prevent bullshit from being thrown around is its own complicated and really controversial set of arguments. But the point is, bullshit is really easy to create and extremely difficult to clean up.

And we, I think, are living in a time [00:34:00] where there’s just a profound amount of bullshit, and partly, as your reference to Bannon says, this is intentional. If you are peddling really bad ideas, then it is to your benefit to bring down the level of discourse, to delegitimize good ideas, so that people can’t tell the difference between good ideas and bad ideas.

And if you are louder, then maybe your bad ideas will get heard, and maybe they will get a foothold and gain relevance. I would say it’s a good strategy, if you’re being entirely cynical about the whole thing, because I think in many ways this flood of shit has worked.

But I don’t know. Maybe there’s hope.
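[Editor’s note: the arithmetic behind the bullshit asymmetry principle is stark. A back-of-envelope sketch in Python: the per-claim creation time and posting volume are invented numbers, and the tenfold multiplier stands in for the “order of magnitude” from the principle itself.]

```python
# Back-of-envelope illustration of the bullshit asymmetry principle:
# refuting a claim costs roughly an order of magnitude more than creating it.
minutes_to_create = 1     # invented: one slogan-style false claim per minute
asymmetry_factor = 10     # "an order of magnitude", per the principle
claims_posted = 60        # invented: one prolific hour of posting

posting_minutes = claims_posted * minutes_to_create
debunking_minutes = posting_minutes * asymmetry_factor
print(f"{claims_posted} claims: {posting_minutes} minutes to post, "
      f"roughly {debunking_minutes / 60:.0f} hours to refute")
# One hour of posting generates more than a full working day of fact-checking.
```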

Ryan: I know we’re running short on the time we have together for this conversation. I wondered, and maybe this is idealistic of me: [00:35:00] if we look at rates of public trust, people broadly trust museums more than basically any other institution in society, certainly more than government, more than schools, more than media companies. It can be north of 80% of average citizens who believe that museums are trusted sources of information. Do you see a role for museums in helping to navigate some of these waters? Or do you think it’s one of those situations where, I can’t remember the exact saying, but if you wrestle a pig, the pig has fun and all you get is dirty? Is trying to wade into this beyond any one institution’s scope to create change?

Yimin: Yeah. Maybe this is a cop-out, but I think that last point is absolutely right. This sort of thing is much bigger than any one institution or any one country or whatever. Really, [00:36:00] this is a widespread global phenomenon, this whole dragging of facts, information, and legitimate, honest discourse down into the mud. It is happening all over the place, and it is a problem that is absolutely bigger than any one institution or organization.

But, that said, I think museums do good work and very important work. And I think that, like you said, based on the sort of cultural capital that museums and other cultural and historical organizations have, you have the position, and the responsibility, I think, to sometimes act in an activist role. You have a platform, you have the ability, you have at least some resources and some [00:37:00] people, to be able to push for justice, to push for a more equitable or a more kind sort of society. And part of that comes via a reckoning with the history of our societies and of our civilizations and so on. So I think it’s very important that museums continue doing the work that you’ve all been doing in terms of decolonization, in terms of pointing out the reality of anthropogenic climate change, and all these other very timely and important initiatives. So I don’t think it’s your responsibility to solve the fake news crisis, but I do think we all should try to do our part in, I don’t know if this is preachy or whatever, creating the world [00:38:00] that we all want to live in for the future.

So if this means wrestling the pig that is disinformation down in the mud, then perhaps that’s something that we all have to do. But, that said, we should go into it with eyes open, knowing what we’re getting into. If you’re going to wrestle a pig in mud, then, you know, I would prefer to do it with, I don’t know, some goggles on, or old clothes that I don’t mind getting dirty. This analogy has totally broken down, but basically, if I’m going to wrestle a pig in mud, I would like to know what I’m getting into beforehand.

And I may not win, but I think my chances are better if I know I’m going to wrestle a pig than if I just go into it completely blind.

Ryan: Thank you so much for your time, and for helping us unpack what is, by its very nature, a very confusing, [00:39:00] scary, overwhelming topic. And thank you also for giving us enough audio that we can now make a vocal deepfake of you and record six more of these podcasts.

Yimin: Oh, absolutely. If you’re talking about bots like that, that’s the end of humans as we know it, right? If we can automate and just machine-learn everything, what use are we? That’s a cheery topic for perhaps another time.

Ryan: Thank you so much. If people want to learn more about your research or your work, is there any place that people can visit?

Yimin: I do have a Twitter account that I basically don’t use, as you may possibly infer, given my comments about Twitter. My dissertation supervisor, Dr. Victoria Rubin, does keep her own website at Western with information about the projects that we’ve all worked on. I’ll send that link over, and you can find some of our past projects on the website, and [00:40:00] there’s contact information there if you want to get in touch with any of us in the future.

Ryan: Thank you so much. I’ll put those links in the show notes for this podcast, and thank you again for your time this morning.

Yimin: Oh, no worries. This is fun. Thanks for having me.

The BC Museums Association gratefully acknowledges funding support for this project from the Government of Canada.

Links: