Technology and Security (TS) explores the intersections of emerging technologies and security. It is hosted by Dr Miah Hammond-Errey. Each month, experts in technology and security join Miah to discuss pressing issues, policy debates, international developments, and share leadership and career advice. https://miahhe.com/about-ts | https://stratfutures.com
Dr Miah Hammond-Errey (00:00)
My guest today is Ganna Pogrebna. Thanks for joining me, Ganna.
Ganna Pogrebna (00:04)
Thank you, Miah, thanks for having me.
Dr Miah Hammond-Errey (00:07)
Ganna Pogrebna is a professor of behavioural data science and an author, speaker and executive leader of major research teams. She's currently affiliated with Queen's University Belfast and the University of Sydney, and is a non-resident fellow at the Australian Strategic Policy Institute, or ASPI. She also leads research on behavioural data science at the Alan Turing Institute. She was previously the executive director of the AI and Cyber Futures Institute, which won Australia's Cybersecurity Research Institution of the Year in 2025. She has secured over $30 million in competitive research funding and published over 100 peer-reviewed publications, including recent books on bias, cyber risk, and AI ethics. She has won numerous high-profile individual awards and is on several editorial boards.
We are coming to you today from the lands of the Gadigal people. I pay my respects to elders past, present and emerging and acknowledge their continuing connection to land, sea and community.
Ganna Pogrebna (01:03)
Yeah, I want to say hello to all the First Nations people who are listening to us and acknowledge elders past and present.
Dr Miah Hammond-Errey (01:12)
I know inclusion in technology is really important to you. What do you think the most pressing issues for inclusion in the technology space are at the moment, so that technology delivers what you've referred to as a social uplift rather than driving inequality?
Ganna Pogrebna (01:26)
Yeah, I'm not going to say that we've solved the problem with women in technology, I think it's still a big problem. But talking generally about underrepresented groups, I think one of the most pressing problems is the divide between urban and rural areas. A lot of data comes from rural and regional areas and the third world, and, as you know, the majority of trainers of machine learning models are in the global south, right? But the people who reap the benefits, who enjoy the benefits of technology, are all in cities. So yeah, I think that's a big, big problem.
Dr Miah Hammond-Errey (02:07)
Do you think there are any obvious things we can do to improve that?
Ganna Pogrebna (02:12)
Yes, I mean, my personal approach to this is small-step policy. So when I'm speaking at a conference, I have a rider. The rider is that they have to include a person from an underprivileged background or, you know, a sufficient number of women.
So I demand that. The usual thing people say is, we can't find anybody. And I normally say, if you can't find anybody, I will find somebody I know who can participate. So yeah, I think it all helps if every one of us has this kind of inclusion rider: when we are joining a company, bringing in another female lead, maybe from a minority background, or another person from an underrepresented group, you know, someone from the LGBTQ community or something like that. So yeah, that's what I do anyway.
Dr Miah Hammond-Errey (03:12)
It's a great suggestion.
You know, the Technology and Security podcast obviously explores the nexus between technology and security. What do you think are the most important insights that behavioural data science can offer this audience?
Ganna Pogrebna (03:28)
Yeah, I think in the majority of cases when we talk about technology, it's mostly about someone from engineering, let's say software engineering, designing something for people. And the lack of understanding of how people actually behave in different scenarios really shows. That's why sometimes we have really nice tools, but they're not adopted by the stakeholders. So that's how behavioural science contributes: it brings the understanding of behaviour, and we can create, let's say, machine learning models that incorporate decision theory or different types of things. Just to give you an example,
I'm sure everyone has a smartphone. The original iPhone, if you don't know, basically had the network antenna running around the edge of the phone. But when you're calling someone, you're holding it like this, right? So all of a sudden you wouldn't have network. And this is an example of how the engineering thinking never considered what people actually do.
And this is how the behavioural component contributes to the design of things. Similarly, if you're building a bridge, and I was involved in a project where we built a bridge, if you just assume that people walk on the bridge, it will be really unstable if people run in different directions. So again, you need behavioural scientists or behavioural data scientists to create a model of how this kind of random motion is going to work if it happens. So yeah, lots of the applications are around use and co-design with stakeholders.
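A minimal sketch of the kind of behavioural model described here, assuming nothing about the actual bridge project Ganna mentions: it compares the aggregate lateral load on a span when pedestrians move independently versus in step. All parameters are illustrative placeholders.

```python
# Sketch: why modelling pedestrian behaviour matters for bridge stability.
# Compares aggregate lateral force when walkers have random, independent
# phases versus when they fall into step. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_people = 200                 # pedestrians on the span
freq_hz = 1.0                  # typical lateral footfall frequency
force_n = 25.0                 # per-person lateral force amplitude (N)
t = np.linspace(0, 10, 2000)   # 10 seconds of simulated walking

# Independent walkers: random phase per person, so forces partially cancel.
phases = rng.uniform(0, 2 * np.pi, n_people)
independent = (force_n * np.sin(2 * np.pi * freq_hz * t[:, None] + phases)).sum(axis=1)

# Synchronised walkers: identical phase, so forces add coherently.
synchronised = n_people * force_n * np.sin(2 * np.pi * freq_hz * t)

print(f"peak load, independent walkers:  {np.abs(independent).max():7.0f} N")
print(f"peak load, synchronised walkers: {np.abs(synchronised).max():7.0f} N")
# Independent phases scale roughly with sqrt(N); synchronised ones with N.
# A model that assumes people only "walk" misses the dangerous case.
```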
Dr Miah Hammond-Errey (05:18)
I have a feeling we're going to come back to human and technology interactions throughout this discussion.
What do you think the biggest issues for cybersecurity are in the next two to five years in an Australian context?
Ganna Pogrebna (05:28)
Yeah, I think one of them I already touched on a little bit, which is the divide between rural and urban areas, because in rural areas I feel that people are particularly vulnerable. As you know, there are problems with networks, so people will basically join a free Wi-Fi network without quite understanding how insecure it could be. Another thing, and this problem is particularly important in Australia, is QR codes, right? You can weaponize QR codes really easily, but in this country we're particularly encouraged to scan QR codes everywhere.
So that's at the individual level. At the small business level, I think it's third-party risk. I spend a lot of time with SMEs, and usually the founders tell me, but we're using this tool or that tool. And often you have no idea how the model was trained, what the potential risks of using that model are, or basically how secure it is. You have some due diligence you can check, but if you're using off-the-shelf tools, it's really problematic.
Dr Miah Hammond-Errey (06:53)
And they're often considered the industry best-practice tools, so you're often kind of forced into using those, right?
Ganna Pogrebna (06:58)
Yeah,
yeah, that's it. And look, I get it, because as a founder, I was in a situation where I had to build a team on the cheap, on a very small budget. So I understand that you don't have the funds, and you don't necessarily have time to train people to understand all these risks. And you're kind of forced to adopt these different tools that are just out there. But there are no legal requirements in Australia where we need to understand where the models are coming from and how they were trained. And imagine that you are, say, an SME that does inclusion work, for example, but you're using an algorithm that exploited people for training, right? So maybe they used some people in Africa to label the data sets. I'm not going to say that kills the purpose of your enterprise, but it certainly creates some ethical issues in using these tools.
Dr Miah Hammond-Errey (08:07)
And they're really difficult ethical issues for individuals to solve. They're systemic, and if you're an individual founder trying to use the best tools, as you say, you have limited resources. Setting regulations, having clear guidelines and codes of conduct does make it a little bit easier, because as an individual it's very hard to shape that system.
Ganna Pogrebna (08:30)
If we stop paying for technology that is not properly developed, the situation will change and we will pay attention to what we're using. But until that happens, I think it doesn't matter what the government puts out in that domain.
Dr Miah Hammond-Errey (08:45)
You've spent some time traveling and working between Australia, Europe and the US. What do you see as the most concerning or interesting technology and security trends?
Ganna Pogrebna (08:45)
I'll probably start generally with how Australia is different from the US and UK. First of all, I really admire Australian founders, people like you who do things in Australia, because we are a lot cheaper than people in the UK and US. I just want to say to investors and venture capitalists that on a dollar, you're getting so much more out of the Australian workforce, founders or CTOs and all of these types of people. And it's because normally we do not charge for our time; we charge for the systems and other types of technology, so you're getting a lot more mileage out of the investment. But unfortunately, Australian investors themselves are incredibly risk-averse. When we're looking at technology, we have lots of cool stuff being developed in Australia. I especially see that in the quantum space, quantum algorithms. I don't mean quantum computing, but quantum algorithms, when we're using, for example, physics models in machine-learning-type algorithms. Australia is one of the leading countries here, but most of the time, and I speak from experience, I used to get an American or UK investor to commit to a project and then we would get some Australian investors. So it's a very risk-averse market.
So I think people deserve admiration. But in terms of general trends, I think definitely this year we saw a lot more agentic AI and the development of autonomous agents in the cybersecurity space. It's a very interesting area, and very interesting to work out how to deal with it, because obviously you can only work in very constrained environments with these types of things, but very promising. Like I said, quantum modelling synergies with machine learning for cybersecurity are very interesting. And an area I've worked in, which is probably my personal bias, is digital twinning. Yeah, digital twins are definitely very hot.
Dr Miah Hammond-Errey (11:23)
Do you want to share a little bit for my audience about what digital twins are and some examples of how you've used them?
Ganna Pogrebna (11:29)
Yeah, sure. Most of the time when we say digital twin, people imagine just a digital replica of an object or a person or something like that. But if you just have an exact replica of me, let's say a holographic replica of me here,
that is not a digital twin. We call it a digital shadow, and the reason is that the data flows in only one direction. We scan what Ganna looks like and make a digital representation of her. It becomes a digital twin when we actually model what Ganna is going to do in different scenarios. So let's say, if we play music, will she dance? Hopefully not, I don't think anyone wants to see that.
But basically, if we simulate what I will do in different scenarios, that would be a digital twin. So we can create a twin of the entire cybersecurity system of a company, for example, and run different types of simulations, simulated experiments,
so that when we actually have to experiment, we can run only one or two experiments and not, like, 14, which significantly cuts the cost of experimentation. We can also work out recommendations, for example, for chief information security officers or people in compliance departments, about what decisions they could make. So basically it's a decision-improvement tool that helps you simulate future scenarios that you don't necessarily have observations about. So it's very powerful.
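A minimal sketch of the simulated-experiment idea, not the actual twin Ganna built: Monte Carlo runs over a handful of hypothetical attack scenarios to compare control budgets before committing to a real experiment. Every scenario, probability and dollar figure is an invented placeholder.

```python
# Sketch of "simulated experiments" in a cyber digital twin: estimate the
# expected annual loss under different control investments, so only the
# most promising option needs to be tested for real. All values invented.
import random

random.seed(7)

# (scenario, yearly success probability, loss if it succeeds, mitigating control)
SCENARIOS = [
    ("phishing",        0.30, 120_000, "awareness_training"),
    ("ransomware",      0.10, 800_000, "offline_backups"),
    ("third_party_api", 0.15, 300_000, "vendor_review"),
]
MITIGATION = 0.6  # assumed cut in success probability when a control is funded

def expected_loss(funded_controls, runs=20_000):
    """Monte Carlo estimate of annual loss given a set of funded controls."""
    total = 0.0
    for _ in range(runs):
        for _name, p, loss, control in SCENARIOS:
            if control in funded_controls:
                p *= (1 - MITIGATION)
            if random.random() < p:
                total += loss
    return total / runs

# "Run" the budget options in the twin instead of in the real organization.
for funded in [set(), {"awareness_training"}, {"offline_backups", "vendor_review"}]:
    label = ", ".join(sorted(funded)) or "nothing funded"
    print(f"{label:35s} -> expected annual loss ${expected_loss(funded):,.0f}")
```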
One of the first digital twins was designed for space, for the Apollo 13 mission. If people in the audience don't know Apollo 13, it was basically a spacecraft whose oxygen tank exploded partway through the mission, and what happened was a bunch of people were basically stranded in space. What NASA did was run lots of simulations, in digital form and in physical form, and they brought everybody safely home. If you don't believe me, there is a nice movie about it that you can watch.
Effectively, what this tool for CISOs does is, well, as you know, CISOs are now under tremendous pressure. They have lots of variables coming in from live data and from company preference data, they need to make lots of decisions, and they deal with a lot of different teams. If we deconstruct the processes inside organizations, especially if we take machine learning operations and how they translate throughout organizations, there are normally at least four types of stakeholders, right? There is the software engineering team, the data science team, the cyber team, and the end-users team, let's just say your board. And you need to navigate all this complexity and risk around all these teams and what they do in the business pipelines. So the tool essentially helps not only to understand what these processes are, but helps the CISO set priorities: where are the most risky, vulnerable areas in the organization in terms of cybersecurity? Then it helps you prioritize your resources towards particular types of risks. So that's what that particular tool does. I also worked on some twins for nuclear decision-making. If you want, I can talk about that too.
Dr Miah Hammond-Errey (15:13)
Yeah, please.
Ganna Pogrebna (15:14)
Yeah, so it's a completely different problem, but you also need simulation. In the cybersecurity domain, the complexity of the twin comes from the fact that we have lots of data, right? We have internal communications, we have reports from our cybersecurity system, so we know how many attacks we are trying to defend ourselves against. We have the teams whose bias we're trying to estimate. So we have lots and lots of data. In the nuclear domain, when we look at nuclear decision-making as a humanity, we have only had slightly over 20 close calls in nuclear deterrence, where we almost had a nuclear disaster and, thank God, nothing happened.
If you think in terms of nuclear decision-making, the Cuban Missile Crisis, I think, is the one most people would know. But the problem in that domain is that we don't have enough data. We don't have enough data on how decision-makers behave in these situations. What that twin does is allow current nuclear decision-makers in different countries, and people who were decision-makers in the very recent past, to simulate various scenarios, to actually build up a database of how people behave in nuclear deterrence situations. This project was done with the European Leadership Network, and if you Google it, you will find several white papers we have written on this topic that describe what the tool is. Effectively, it also tries to decrease uncertainty. It gives you different scenarios of how people make decisions without technology, and of how emerging technologies are potentially disrupting this domain and how people are making decisions.
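A toy sketch of the data-generation idea behind that twin, not the European Leadership Network tool itself: when history offers only around 20 close calls, you build the behavioural dataset by logging the choices participants make in scripted scenarios. The scenario content below is invented.

```python
# Sketch: building a behavioural dataset by simulation when real observations
# are scarce. Participants play scripted escalation scenarios; every choice
# is logged. Scenario text and options are invented placeholders.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    prompt: str
    options: list[str]

@dataclass
class DecisionLog:
    records: list[dict] = field(default_factory=list)

    def record(self, participant: str, scenario: Scenario, choice: str) -> None:
        assert choice in scenario.options, "choice must be one of the options"
        self.records.append(
            {"participant": participant, "scenario": scenario.name, "choice": choice}
        )

early_warning = Scenario(
    name="ambiguous_early_warning",
    prompt="Radar reports a possible launch; other sensors disagree.",
    options=["wait_for_confirmation", "raise_alert_level", "escalate"],
)

log = DecisionLog()
# In the real exercise these choices come from current and former officials;
# here two hard-coded runs just show the dataset accumulating.
log.record("participant_01", early_warning, "wait_for_confirmation")
log.record("participant_02", early_warning, "raise_alert_level")
print(log.records)
```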
Dr Miah Hammond-Errey (17:12)
I'd like to go to a segment. What's a new cooperation, competition or conflict you see coming in the next 12 months?
Ganna Pogrebna (17:14)
Mm-hmm.
When I think about it, it's definitely international. We were in a weapons race; we are now in a race of weapons of mass influence, if you will, or an AI weapons race, right, in terms of trying to outdo China.
You can tell how much money is being invested. I think the US president came out and announced a 500 billion investment in this area, in AI. In the EU it was 200 billion, and I think in the UK it's around 90-plus billion. Australia at the moment is really strange, because I've only heard about one billion, so it's not that much money compared to other countries. But I think we definitely feel that competition, and who wins is very important; the swings in the market are telling us that it's definitely on the agenda.
On collaboration, I think there is definitely a need to collaborate with companies like OpenAI, the large corporates that are actually developing LLMs, because we see a lot of changes in the market. For example, OpenAI is stealing traffic from other entities, let's say media, right? We definitely need some collaboration; you would basically have to sign contracts to redirect that attention. And that obviously affects the cybersecurity domain as well, because a lot of people turn to OpenAI these days to ask for advice on cybersecurity, instead of reading something from a reputable website where people research these types of things. So these collaborative things will be very important. It's also important for the sustainability of AI, because LLMs have taken a lot of resources to develop, but we still don't have very good use cases for them to deliver a return on investment. So that's the collaborative piece.
Dr Miah Hammond-Errey (19:46)
Mis- and disinformation in our ecosystem is an existential challenge. How can behavioural data science help to counter it? And what connections are you seeing between data science and this new neurotechnology, the influence capacity of neurotechnologies?
Ganna Pogrebna (20:06)
Well, you know, most of the time when we talk about misinformation, it has nothing to do with technical sophistication. A lot of the successful attacks you observe are, on the technological side, I'm not going to say primitive, but very simple. The technology behind them is very simple.
What is sophisticated is the social engineering behind these things, because it's thinking about what people might do. And the problem really is that, for example, I design a lot of algorithms that help us prevent attacks, but those algorithms are limited because they can only be trained on existing data. So if, let's say, Miah, you experienced some attacks on your website or your tool, and we have observations of that, we can train a model that will help us prevent that attack. But if it's a completely new type of attack, which is normally a combination of different types of threats in one big ball of distraction that someone designed, then it's very difficult to detect. Then you need humans. My research, and research from several other people doing human factors work in cybersecurity, finds that people on average are about 30% more efficient than algorithms when we talk about completely new risks. So disinformation is definitely one of the things you can only tackle with human involvement, because it's one of those things that are difficult to predict without having data. The good thing is that we do have research on what is called the human as a cybersecurity sensor. It's not my research; it's research by a guy called George Loukas, based at the University of Greenwich in the UK.
Basically, the idea of the human as a cybersecurity sensor is that his team analyzed multiple attacks that are deployed on humans and figured out there are 21 varieties, or types, of these attacks. They designed a test and released it on social media, I think it was Twitter originally, and I think they run it pretty regularly.
The test works like this: you see a screen with certain types of information. It could mimic a phishing email, or different types of attacks. You look at the screen, and the question is, do you think this could be the start of an attack? Is it an attack or not? In that context, on average about 70% of people get it right. It's basically just putting people into the mindset that it may or may not be an attack, right, and what happens next. But the cool thing is that through this research they figured out there are people who always get it right, like 100% of the time. And obviously the task for behavioural science now is to figure out what it is about those people. How are they different from the rest of the population? Can we train everybody to see these attacks, or should we have special departments in each company where there is a human cybersecurity sensor sitting there, receiving these different tips, this may be an attack, and doing this recognition?
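A toy version of that screening idea, not the Greenwich team's instrument: show stimuli, ask whether each could be the start of an attack, score respondents, and flag the consistently perfect ones. The stimuli and answer key below are invented; the real test covers 21 attack varieties.

```python
# Toy "human as a cybersecurity sensor" screen: stimuli are shown one by one
# and the respondent answers "could this be the start of an attack?".
# Stimuli and ground truth are invented for illustration.
STIMULI = [
    ("Email: 'Your parcel is held, pay a $1 re-delivery fee' + link", True),
    ("Password-reset email that you requested ten seconds ago",       False),
    ("QR-code sticker pasted over a cafe's table-ordering code",      True),
    ("Calendar invite from your own HR system for open enrolment",    False),
]

def score(responses: list[bool]) -> float:
    """Fraction of stimuli the respondent classified correctly."""
    return sum(r == truth for r, (_, truth) in zip(responses, STIMULI)) / len(STIMULI)

# One simulated respondent: on the real test about 70% of people answer
# correctly on average, and the interesting group scores 1.0 every time.
respondent = [True, False, True, True]
print(f"accuracy: {score(respondent):.0%}")   # -> 75%
```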
Dr Miah Hammond-Errey (23:53)
You've had a really varied and international career. Do you have any reflections on how to bridge what is often seen as a gap between research and then adoption, especially by government agencies or security and emergency services?
Ganna Pogrebna (24:07)
Right, look, I think evidence-based policy is definitely a big problem, because a lot of policy decisions, in my experience, are just made based on intuition rather than data. I've never seen a politician come out and say, we have all this data and that's why we decided to do this.
And most of the time, even when we have data in the political domain, we are asking the general public, not the people who are directly affected by certain types of policies. Generally, I understand the disconnect, because I myself am like a playing coach, right? I write these models, but I'm interested in practice. So I devote a significant amount of my time to developing tools for industry, but very few academics do that. A lot of academics are interested in providing vision, talking about trends or thinking about theory, but they're not developing stuff. And I can't blame them, because it's not really academic work. You need to be sufficiently interested in the real world to do that. I'm an economist by training, but I got a job in an engineering department where people wanted to do real stuff in the real world, and that's how my research became really applied.
But it's not for everybody. I think the disconnect happens because if you go to a consultancy, they give you a finished product, right? They develop things for you and deliver a solution. In the context of academic work, it's mostly just advice, so I understand why industry and policymakers can't fully engage with academia, and equally why academics are not sufficiently motivated to go in that direction. I think it's just a problem of KPIs and priorities on both ends.
Another thing I wanted to quickly mention is the timeframe. If you are developing real-world solutions, normally you need to act fast; you have maybe three to six months to develop a product. Research doesn't work like that. We have maybe two years before we publish a good paper. So it's a different timescale, and very few academics are good with quick turnarounds.
Dr Miah Hammond-Errey (26:53)
I want to go to a segment called Emerging Tech for Emerging Leaders. What do you see as the biggest shifts for leadership from the introduction of new technologies?
Ganna Pogrebna (26:57)
I keep saying this, but I think the main challenge is that you have to realize, and as an executive I realized this a little bit too late, that you are not leading human teams anymore. You are leading human-machine teams. Every person you're leading is a combination of human and machine at the moment. People have different strengths relative to machines, right? Someone is really great at creativity, someone is great at transferable skills, someone is great at prompt engineering and all that kind of stuff. So you will have different strengths on your team in how they collaborate with this machine component. But you have to realize that it's not just humans and it's not just algorithms; it's a combination. And I think the complexity is in understanding where there is human bias potentially feeding into a problem and where there is machine bias, and you need to see it and offset it with your vision. So it's that combination of complexity that I think current leadership is facing.
Dr Miah Hammond-Errey (28:22)
Yeah, that's a huge and very interesting shift. Let's jump to bias because you recently published a book called The Big Bad Bias Book. Can you give us a bit of an overview?
Ganna Pogrebna (28:33)
Yeah, actually that's a good story for this podcast, because it came out of cybersecurity work. I have this colleague, her name is Karen Renaud. She's a very famous professor in the UK. Her work is mostly on cybersecurity and how people navigate the field, especially passwords, how people set up passwords. She's also interested in how you provide secure systems for the elderly. So she's this amazing person who works on human factors in cyber. We'd been talking for a while, and we decided that a really cool and very important topic in cybersecurity is understanding human bias and how it comes into different systems.
When people try to find out about bias, just try Googling bias, right, you will find lots of sources, and there's so much information staring at you that it's very difficult to make sense of it. So we decided, okay, we need some systematic way of understanding bias. There was something, I think it's called the Cognitive Bias Codex, but it was completely incomprehensible, this big round graph with lots of biases randomly allocated. So we decided, okay, we're going to do a table. First we just released the table, had lots of discussions with peers, let people criticize the tool. And the idea was that we would then write the book. The book is a little bit like a reference book.
Basically, each entry has the name of the bias, let's say illusion of control. That's a very famous thing most of us do; we all have the illusion of control, which is basically when we think we control something when in fact there is no way to control it. You have the description of the bias. You have a real-world application story between two characters that we have developed. Then you also have the robustness part, how robust the effect is. And you have two types of examples, a trivia example and a business example, plus some references for people who want to learn more beyond what we've written in the book. Through this work, we wrote up about 202 human biases, which the book covers, along with the table. So yeah, it was many years of work, and work with an illustrator.
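A sketch of the per-bias template Ganna describes, expressed as a data structure. The field names paraphrase her description of an entry; the sample content is an illustrative stand-in, not quoted from the book.

```python
# One entry of a bias reference book as a record type. Field names follow the
# structure described above; the example content is a stand-in, not a quote.
from dataclasses import dataclass

@dataclass
class BiasEntry:
    name: str
    description: str
    character_story: str     # real-world application story between two characters
    robustness: str          # how robust the effect is in the literature
    trivia_example: str
    business_example: str
    references: list[str]    # pointers for readers who want to go deeper

illusion_of_control = BiasEntry(
    name="Illusion of control",
    description="Believing we control outcomes that are in fact uncontrollable.",
    character_story="A character rolls dice 'their way', convinced it helps.",
    robustness="Widely replicated across lab and field studies.",
    trivia_example="Pressing the lift button repeatedly to make it arrive sooner.",
    business_example="Treating a risk dashboard as if watching it controlled the risk.",
    references=["Langer (1975), 'The illusion of control'"],
)
print(illusion_of_control.name)
```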
Dr Miah Hammond-Errey (31:07)
I want to go to a segment called Interdependencies and Vulnerabilities. What areas of interdependency or vulnerability in technology and security do you think are critical in the coming few years?
Ganna Pogrebna (31:11)
I already mentioned third-party risk; that's definitely a big one. Another would be supply chains for cybersecurity. I have a student who is hopefully going to graduate early next year; he just submitted his dissertation looking into the critical software problem. And it's a big problem, because we have criticality in our supply chains not only in terms of where parts are coming from, but also in software. We sometimes rely on software that is developed in countries that can listen in on what you're doing. I think that's a major dependency we need to solve. We see that in the context of Ukraine a lot, right? They've been relying on, let's say, Chinese technology a lot, and we know that China now supplies both the Russian side of the conflict and the Ukrainian side. It's very difficult to rely on software and technology from, let's say, that particular supplier.
Dr Miah Hammond-Errey (32:26)
Are there any interdependencies and vulnerabilities between technology and security that you wish were better understood?
Ganna Pogrebna (32:33)
Let me say this again from the human factors perspective. On many occasions we are developing tools for cybersecurity that are backward-engineered to stakeholders. For example, we would have some cybersecurity tool that is deployed in a company, but we're not thinking at all about how people work and how customers interact with the company. So I think we need to better understand how to co-design. My attitude is, if you are talking about adoption, you've already lost the plot. You need to be talking about co-design: how do we co-design for better security with the people who are going to be using the tools, and then deploy the tools once we've developed them for those users, for those stakeholders? And I don't think we do much of that work; I can probably count on the fingers of one hand the people who do relevant work in that area. Normally we're like, we have the tool, let's figure out what segment we're going to push it to.
Dr Miah Hammond-Errey (33:39)
How do we then as humans team better with machines?
Ganna Pogrebna (33:42)
Yeah, I mean, it really depends on demographic characteristics, on your risk profile, right, in terms of who you are, because for some people it comes quite naturally. Some of us are pretty aware of what we're doing online generally. But we have, say, vulnerable people, again like the elderly segment, where it's very difficult to devise the right products, because they just don't have a lot of experience, or they're more likely to fall into the trap of a good social engineer and all that kind of stuff. So I think the first thing to do is really try tools and try to understand how they work, at least the basics.
For example, with LLMs, lots of people think it's some sort of live communication, but in fact it's a pre-specified, pre-trained model. Yes, it's trained on lots and lots of data, but it's not learning anything from you. And it also doesn't understand what it says, right? It says something that mimics human communication, but in fact an LLM like ChatGPT, for example, doesn't understand what it says to you, and you need to be aware of that. So if we give it a logical problem, it probably will not be able to solve it correctly, because it just doesn't understand how human logic works. And another thing is, you need to realize that machines and people think differently.
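A minimal sketch of that point about LLMs: the model is pre-trained and frozen, and a "conversation" is just client-side state resent with every request. The call_model function is a hypothetical stand-in for any chat-completion API, not a real endpoint.

```python
# Sketch: an LLM "chat" does not learn from you. The model is frozen; the
# only memory is the message list the client resends on each call.
# call_model is a hypothetical stand-in for any chat-completion API.
def call_model(messages: list[dict]) -> str:
    """Stand-in for a real API call. The model sees only `messages`;
    nothing here updates its weights or persists between calls."""
    return "(model reply)"

history = [{"role": "user", "content": "My name is Miah."}]
history.append({"role": "assistant", "content": call_model(history)})

# Start a *fresh* history and the model has no idea who you are: nothing
# was learned. Continuity exists only because the client resends the log.
fresh = [{"role": "user", "content": "What is my name?"}]
print(call_model(fresh))  # it cannot know; the earlier exchange is gone
```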
Dr Miah Hammond-Errey (35:31)
Yeah.
Ganna Pogrebna (35:39)
So as long as you are aware of that, I think you will do much better than the average user or developer.
Dr Miah Hammond-Errey (35:46)
Let's go to a segment. What alliances do you think will be most important in cybersecurity, AI, and emerging tech in the next few years?
Ganna Pogrebna (35:53)
Yeah, I think definitely. Cybersecurity generally is not an issue that can be solved by any sector or segment alone.
The corporate understanding and the policymaking understanding need to come together, really, because companies hold the data and the government has the expertise to understand it. So I think that's what we need to be thinking about: how do we collaborate, especially in terms of information sharing? Because the problem is that adversaries are extremely good at information sharing. I always give this example: there is a hotel in Austria that was hacked four times in exactly the same way. It was a ransomware attack. The hotel has these electronic keys that you use to enter the rooms, and basically the adversaries just locked the system, and the hotel paid the ransom, four times. Very often people say that's stupid, but I say no, it's not stupid if you're a hotel owner, because people want to get into their rooms and you're trying to solve the problem quickly. After these four attacks, they brought physical keys back in. So now they're not using the electronic system at all; people are just walking around with physical keys.
But you see, the adversaries are sharing information, right? Someone hacked into the system and shared the information on the dark web, and others instigated exactly the same attack on the same hotel. But when we try to share information in the cybersecurity domain on the benign side, everyone is sitting on the data. That's the first thing. Second, even when an attack is in progress, let's say on Commonwealth Bank, Commonwealth Bank is not going to call ANZ and say, guys, you know what, we are being attacked, so you're probably next. Because there are corporate issues, legal issues, with that, right? You can't tell a competitor things like this. So until we figure out how to do this information sharing on the good side, where the good guys are, it's going to be very difficult.
Dr Miah Hammond-Errey (38:12)
As you know, my company, StratFutures, has developed a platform to support leaders who make high-stakes decisions to improve their cognitive readiness and resilience.
You've worked with so many different sectors, including lots of movie companies. Do you have some good examples of positive change we've seen companies make for people, for their users or customers?
Ganna Pogrebna (38:37)
The best results I saw in cybersecurity came not so much from the data analysis, but from a behavioural understanding of why things are happening, and from working with KPIs. Probably the best thing I saw recently was a US insurance company that decided to run a monthly competition in reporting cybersecurity threats. They basically have this list that the CEO publishes every month, and every department is trying to be in the first three, or at least not at the bottom of the list, you know. So the managers are incentivizing their staff by whatever means possible to report potential threats. And they really created a positive culture, because they celebrate
Dr Miah Hammond-Errey (39:13)
Yeah.
Ganna Pogrebna (39:29)
people who are reporting stuff, reporting actual threats that could hurt the organization. So, you know, those types of things.
Dr Miah Hammond-Errey (39:35)
That is such a good reminder to come back to KPIs and business incentives, because it actually drives change. Coming up is Eyes and Ears. What have you been reading, listening to or watching lately that might be of interest to my audience?
Ganna Pogrebna (39:41)
So, this area is just moving so quickly. It does help to listen to your podcast. And I tend to just read classical literature, more or less. The new trends you can definitely get from podcasts, but for the philosophy of how things work with human psychology, you just read, basically, the classics.
Dr Miah Hammond-Errey (40:25)
Let's go to a segment called Disconnect. How do you wind down?
Ganna Pogrebna (40:28)
Well, I don't. Well, no, I think what really helps is nature, you know, especially in Australia, just going somewhere. I was, for example, in the Northern Territory recently, where there is nothing, you don't have internet, you don't have anything, and you can just see the Milky Way at night and all of this. It's amazing. We are very lucky to live in one of the most beautiful places on earth. So definitely that. But the reality often is, especially when you work in an SME context or a government context, that you don't have an opportunity to switch off. Most people I know work 24-7.
So, yeah, I would just advise maybe going for a walk, switching everything off, just having some time off gadgets.
Dr Miah Hammond-Errey (41:19)
The final segment is called Need to Know. Is there anything I didn't ask that would have been great to cover?
Ganna Pogrebna (41:24)
The main thing is just to engage, to try, and not be afraid of trying, and not be afraid of making mistakes, because we all make mistakes and that's okay. You're never going to be 100% ready for all the technology that surrounds you. But trying certainly helps your understanding, at least of the limitations of what you're using or what you're trying to engage with. And I just think that's probably the most important thing you can do, just try.
Dr Miah Hammond-Errey (41:55)
Yeah, thank you so much. Ganna, thank you for joining me. It's really been a pleasure.
Ganna Pogrebna (42:00)
Likewise, thanks for having me.