
How the Social Sector Thinks About Tech Is Wrong


Instead of investing resources in building a solution from scratch, it’s smarter to research existing solutions and tools that can be modified for specific needs, the authors say. Credit: Unsplash / Marvin M

Opinion | Tuesday, October 26, 2021 | Inter Press Service

When building budgets for programmes, nonprofits (and donors) must change their mindsets and look at tech as core infrastructure; without this orientation, organisations lose out because they are bearing the cost of technology anyway. It makes no sense not to account for it properly.

We misunderstand technology

1. Tech is an enabler, not the solution

When it comes to nonprofits implementing tech, there are a few misconceptions or assumptions we have encountered during our work at Tech4Dev. The first misconception is that tech is the solution. Tech is, in fact, an enabler: it makes a solution more effective and efficient, but it cannot solve problems by itself. Take mobile data collection as an example.

However, to use this technology effectively, an organisation must have the processes and systems in place to know what data to collect and from whom, and field staff who are trained in the system and reasonably knowledgeable about data collection and its biases. In such a scenario, tech enables high-quality data collection, but the real work lies in the organisation’s processes.

2. It’s not about the size of an organisation

The second misconception is that there is a ‘right size’ an organisation needs to reach before implementing tech solutions; in other words, that tech is not for smaller grassroots organisations. A better way to think about this is to ask yourself: Do I currently have a solution for the problem at hand, and do I have a systematic way of implementing that solution? If the answer is yes, then size should not be a factor at all.

For instance, we’ve seen small organisations use Google Sheets extremely effectively. Cheap tech can work at a small scale, and it can also work at a large scale. We’ve also seen really poor tech being used in both small and large organisations.

So it’s not about size but about having a systematic approach, because even though tech makes things more efficient, it also tends to add more complexity and introduce another element that employees will have to learn and work with.

We were working with a nonprofit organisation—let’s call it Team Health—that had a large number of fieldworkers, from whom they would receive data via multiple channels including WhatsApp, emails, and phone calls. None of this data arrived in a standardised or structured manner, nor was any of it recorded. Team Health wanted to change this.

They were keen to introduce an app, assuming that all their fieldworkers would know how to enter the requisite information in exactly the way the tech required, and that this would give them standardised data in exactly the form they needed.

But because their processes at the time were not standardised, and their fieldworkers were accustomed to a certain way of submitting data, the app would not solve their problem. In fact, it might have made things worse had they gone down that path.

3. Asking donors to ‘fund tech’

The third misconception among organisations is that funders are hesitant to pay for tech. Instead of asking donors to ‘fund technology’, nonprofits should articulate why technology is important to the organisation’s core functioning.

They must incorporate it as such in their proposals. We need to educate the funder ecosystem as well as the nonprofit ecosystem for this to become a reality.

Take the case of an organisation—Team Sanitation—working on community toilets for the urban poor in India. They used a fair amount of technology for data collection and geographic information system (GIS) mapping in their day-to-day operations.

These tools were core to their project, and so Team Sanitation started incorporating all costs associated with using these technologies (for example, licensing and operational costs) as necessary project costs in their funding proposals.

And they haven’t got any pushback from donors for doing so. As long as organisations can demonstrate the need for tech within their programmes, most donors will not have any issues supporting such core expenses.

4. Thinking that a custom tech solution needs to be built from scratch

The fourth mistake many organisations make is to think that they need to build custom tech solutions from scratch. But before thinking about this, nonprofits need to define their problems and needs.

Detailing what their top problems are, why they are important, and how they impact the work that they are trying to do can help them understand where tech might help, and where it might not. If tech is in fact the way to go, then it’s important to acknowledge that very few nonprofits have a unique problem that they need solved.

The context, communities, and resources might differ, but fundamentally the problem a nonprofit is trying to solve has likely been attempted or solved by somebody else already.

For instance, let’s take the case of an organisation that is in the business of training primary school teachers, and finds that doing this at scale, in person, is cost-prohibitive. Surely, there are others that have faced this issue of cost and scale, and have worked on a solution.

Even so, in the nonprofit sector there is a tendency to build custom tech platforms when they are not needed. Both funders and nonprofits have been burnt by this: solutions were built that, in some cases, had to be written off entirely, and in others had little progress to show for the investment.

Custom tech is not only a waste of resources, time, and effort, but it is also not scalable. For this reason, instead of investing resources in building a solution from scratch, it’s smarter to research existing solutions and tools that can be modified for specific needs.

We’ve seen multiple custom builds of mobile data collection platforms, case management systems, and customer relationship management (CRM) systems across different nonprofits, most of which were inferior and lacking compared to the current open-source and commercially available solutions. ‘Research before build’ is a mantra we follow quite religiously within Tech4Dev.

We need to build a culture of collaboration and sharing knowledge where everyone benefits

Given that there are existing solutions to problems that several nonprofits are trying to solve, the question arises: What are the barriers to accessing such information?

Most nonprofits do not have the technological knowledge or expertise needed to work out which tools might be useful for their specific problem. Connecting the dots between the problem and potentially useful technologies is usually the responsibility of the software partner.

However, since software partners often have limited experience in the social sector, their approach to an organisation’s problem is to simply build a solution specifically for the nonprofit. This is far from ideal. Not only do we need software partners that are well versed with the social sector and the problems nonprofits are trying to solve, but we also need nonprofits to strengthen their understanding of tech.

In order to do this, we need to build a knowledge base for tech that everyone can learn from—nonprofits, donors, and software partners. This kind of open ecosystem will also help funders realise when they are funding similar solutions across multiple organisations, and it will help organisations learn from each other’s work.

We must prioritise open-source publishing of the work

To build an accessible ecosystem, the first step is to share existing knowledge with all the relevant stakeholders. Nonprofits should publish their programmes, challenges, solutions, and learning in the public domain. For example, if a nonprofit is spending 300 hours working on a project, it should spend at least 10 hours creating open-source material that helps people understand what it is that they are doing.

Creating awareness through open-sourced content is crucial for organisations in the social sector so they can learn from and support each other better. While this might not happen right away, as more and more nonprofits share their expertise, the social sector can start to build these broader ecosystems faster. Organisations must ideally move beyond the fear of sharing their ‘trade secrets’, in recognition of the fact that paying it forward will benefit them in the long run.

Donors and intermediary organisations have an important role to play

Organisations like IDinsight do an amazing job publishing their work on a timely basis as seen from their blog and LinkedIn pages. Sharing this information helps distribute knowledge across a wide variety of ecosystem players, hence strengthening the ecosystem.

Donors can nudge these organisations to publish their work as it is being done to help disseminate the knowledge as early as possible. We should never wait till we have the perfect, well-crafted report. Publishing things as the work is being done is another mantra for the projects we run within Tech4Dev.

In India today, the onus of facilitating the building of an ecosystem falls more on funders and intermediary organisations than it does on nonprofits. This is because nonprofits are resource-constrained and devote the majority of their efforts to their programmes. Moreover, they do not have the kind of influence and clout that donors have, and might not have the skills either.

The first step that funders can take is to move away from traditional contracts that restrict sharing of content and intellectual property (IP) and towards sharing IP in the public domain. Further, given that funders typically work with multiple organisations within a specific sector, they might be better positioned to see the bigger picture here.

They can also help nonprofits choose software partners. Here, they must be sensitive to the skewed funder–nonprofit power dynamic, and play a supportive role rather than a directive one. There is a lot that funders can do to strengthen the tech ecosystem within the social sector. Unfortunately, there are very few donors and organisations focused on this ecosystem.

We need a much greater push towards building ecosystems and platforms at a much faster rate, and towards providing adequate support to sustain them. The social sector needs such spaces so that it can integrate technology better, and more intelligently, across its work.

Donald Lobo serves as executive director of the Chintu Gudiya Foundation, a private family foundation based in San Francisco, CA, that funds US-based nonprofits and organisations developing open-source software for the public good. 

Sanjeev Dharap is an entrepreneur and start-up adviser, and has worked in Silicon Valley for over 25 years. He holds an MTech in Computer Science from Pune University, India, and a PhD in Computer Science from Penn State University. He has been involved with Tech4Dev since early 2019.

This story was originally published by India Development Review (IDR).

© Inter Press Service (2021) — All Rights Reserved. Original source: Inter Press Service

Original Post: globalissues.org


UNESCO Member States Adopt Recommended Ethics for AI


The agreement outlines the biases that AI technologies can “embed and exacerbate” and their potential impact on “human dignity, human rights and fundamental freedoms, gender equality, democracy … and the environment and ecosystems.”

by SWAN – Southern World Arts News (Paris) | Friday, November 26, 2021 | Inter Press Service

The adopted text, which the agency calls “historic”, outlines the “common values and principles which will guide the construction of the necessary legal infrastructure to ensure the healthy development of AI,” UNESCO says.

UNESCO Director-General Audrey Azoulay. Credit: AM/SWAN

The text states that AI systems “should not be used for social scoring and mass surveillance purposes,” among other recommendations.

The organization’s 193 member states include countries, however, that are known to use AI and other technologies to carry out such surveillance, often targeting minorities and dissidents – including writers and artists. Governments and multinational companies have also used personal data and AI technology to infringe on privacy.

While such states and entities were not named, UNESCO officials acknowledged that the discussions leading up to the adopted text had included “difficult conversations”.

Presenting the agreement Nov. 25 at the organization’s headquarters in Paris, UNESCO’s Director-General Audrey Azoulay said the initiative to have an AI ethics framework had been launched in 2018.

“I remember that many thought it would be extremely hard if not impossible to attain common ground among the 193 states … but after these years of work, we’ve been rewarded by this important victory for multilateralism,” Azoulay told journalists.

She pointed out that AI technology has been developing rapidly and that its profound effects encompass both advantages to humanity and wide-ranging risks. Because of this impact, a global accord with practical recommendations was necessary, based on input from experts around the world, Azoulay stressed.

The accord came during the 41st session of UNESCO’s General Conference, which took place Nov. 9 to 24 and included the adoption of “key agreements demonstrating renewed multilateral cooperation,” UNESCO said.

While the accord does not provide a single definition of AI, the “ambition” is to address the features of AI that are of “central ethical relevance,” according to the text.

These are the features, or systems, that have “the capacity to process data and information in a way that resembles intelligent behaviour, and typically includes aspects of reasoning, learning, perception, prediction, planning or control,” it said.

While the systems are “delivering remarkable results in highly specialized fields such as cancer screening and building inclusive environments for people with disabilities”, they are equally creating new challenges and raising “fundamental ethical concerns,” UNESCO said.

The agreement outlines the biases that AI technologies can “embed and exacerbate” and their potential impact on “human dignity, human rights and fundamental freedoms, gender equality, democracy … and the environment and ecosystems.”

According to UNESCO, these types of technologies “are very invasive, they infringe on human rights and fundamental freedoms, and they are used in a broad way.”

The agreement stresses that when member states develop regulatory frameworks, they should “take into account that ultimate responsibility and accountability must always lie with natural or legal persons” – that is, humans – “and that AI systems should not be given legal personality” themselves.

“New technologies need to provide new means to advocate, defend and exercise human rights and not to infringe them,” the agreement says.

Among the long list of goals, UNESCO said that the accord aims to ensure that digital transformations “contribute as well to the achievement of the Sustainable Development Goals” (a UN blueprint to achieve a “better and more sustainable future” for the world).

“We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable AI technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues,” UNESCO stated.

Regarding climate change, the text says that member states should make sure that AI favours methods that are resource- and energy-efficient, given the impact on the environment of storing huge amounts of data, which requires energy. It additionally asks governments to assess the direct and indirect environmental impact throughout the AI system life cycle.

On the issue of gender, the text says that member states “should ensure that the potential for digital technologies and artificial intelligence to contribute to achieving gender equality is fully maximized.”

It adds that states “must ensure that the human rights and fundamental freedoms of girls and women, and their safety and integrity are not violated at any stage of the AI system life cycle.”

Alessandra Sala, director of Artificial Intelligence and Data Science at Shutterstock and president of the non-profit organization Women in AI – who spoke at the presentation of the agreement – said that the text provides clear guidelines for the AI field, including on artistic, cultural and gender issues.

“It is a symbol of societal progress,” she said, emphasizing that understanding the ethics of AI was a shared “leadership responsibility” which should include women’s often “excluded voices”.

In answer to concerns raised by journalists about the future of the recommendations, which are essentially non-binding, UNESCO officials said that member states realize that the world “needs” this agreement and that it was a step in the right direction.

© Inter Press Service (2021) — All Rights Reserved. Original source: Inter Press Service

Article: globalissues.org


Digital Child’s Play: Protecting Children From the Impacts of AI


UNICEF has developed policy guidance to protect children from the potential impacts of AI. Credit: UNICEF/Diefaga

Friday, November 26, 2021 | UN News

Children are already interacting with AI technologies in many different ways: they are embedded in toys, virtual assistants, video games, and adaptive learning software. Their impact on children’s lives is profound, yet UNICEF found that, when it comes to AI policies and practices, children’s rights are an afterthought, at best.

In response, the UN children’s agency has developed draft Policy Guidance on AI for Children to promote children’s rights, and raise awareness of how AI systems can uphold or undermine these rights.

Conor Lennon from UN News asked Jasmina Byrne, Policy Chief at the UNICEF Global Insights team, and Steven Vosloo, a UNICEF data, research and policy specialist, about the importance of putting children at the centre of AI-related policies.

AI technology will fundamentally change society.

Steven Vosloo, a UNICEF data, research and policy specialist. Credit: UNICEF

Steven Vosloo: At UNICEF we saw that AI was a very hot topic, and something that would fundamentally change society and the economy, particularly for the coming generations. But when we looked at national AI strategies, and corporate policies and guidelines, we realized that not enough attention was being paid to children, and to how AI impacts them.

So, we began an extensive consultation process, speaking to experts around the world, and almost 250 children, in five countries. That process led to our draft guidance document and, after we released it, we invited governments, organizations and companies to pilot it. We’re developing case studies around the guidance, so that we can share the lessons learned.

Jasmina Byrne: AI has been in development for many decades. It is neither harmful nor benevolent on its own. It’s the application of these technologies that makes them either beneficial or harmful.

There are many positive applications of AI that can be used in education for personalized learning. It can be used in healthcare, language simulation and processing, and it is being used to support children with disabilities.

And we use it at UNICEF. For example, it helps us to predict the spread of disease, and improve poverty estimations. But there are also many risks that are associated with the use of AI technologies. 

Children interact with digital technologies all the time, but they’re not aware, and many adults are not aware, that many of the toys or platforms they use are powered by artificial intelligence. That’s why we felt that special consideration has to be given to children, because of their particular vulnerabilities.

Children using computers. Credit: UNICEF/Diefaga

Privacy and the profit motive

Steven Vosloo: Take an AI-powered toy. It could be using natural language processing to understand words and instructions, so it’s collecting a lot of data from that child, including intimate conversations, and that data is being stored in the cloud, often on commercial servers. So, there are privacy concerns.

We also know of instances where these types of toys were hacked, and they were banned in Germany because they were not considered safe enough.

Around a third of all online users are children. We often find that younger children are using social media platforms or video sharing platforms that weren’t designed with them in mind.

They are often designed for maximum engagement, and are built on a certain level of profiling based on data sets that may not represent children.

Jasmina Byrne, Policy Chief at the UNICEF Global Insights team. Credit: UNICEF

Predictive analytics and profiling are particularly relevant when dealing with children: AI may profile children in a way that puts them in a certain bucket, and this may determine what kind of educational opportunities they have in the future, or what benefits parents can access for children. So, the AI is not just impacting them today, but it could set their whole life course on a different direction.

Jasmina Byrne: Last year this was big news in the UK. The Government used an algorithm to predict the final grades of high schoolers. Because the data fed into the algorithm was skewed towards children from private schools, the results were really appalling, and they discriminated against a lot of children from minority communities. So, they had to abandon that system.

That’s just one example of how, if algorithms are based on biased data, they can actually have really negative consequences for children.

‘It’s a digital life now’

Steven Vosloo: We really hope that our recommendations will filter down to the people who are actually writing the code. The policy guidance is aimed at a broad audience, from the governments and policymakers who are increasingly setting strategies and beginning to think about regulating AI, to the private sector, which often develops these AI systems.

We do see competing interests: decisions around AI systems often have to balance a profit incentive against an ethical one. What we advocate for is a commitment to responsible AI that comes from the top: not just at the level of the data scientist or software developer, but from top management and senior government ministers.

Jasmina Byrne: The data footprint that children leave by using digital technology is commercialized and used by third parties for their own profit and gain. Children are often targeted by ads that are not really appropriate for them. This is something that we’ve been following and monitoring very closely.

However, I would say that there is now more political appetite to address these issues, and we are working to get them on the agenda of policymakers.

Governments need to put children at the centre of all their policy-making around frontier digital technologies. If we don’t think about them and their needs, then we are really missing great opportunities.

Steven Vosloo: The Scottish Government released their AI strategy in March, and they officially adopted the UNICEF policy guidance on AI for children. Part of that was because the government as a whole has adopted the Convention on the Rights of the Child into law. Children’s lives are not really online or offline anymore. It’s a digital life now.

This conversation has been edited for length and clarity. You can listen to the interview here.

UNICEF has developed policy guidance to protect children from the potential impacts of AI. Credit: UNICEF/Schverdfinger

The Global Forum on AI for Children

On November 30 – December 1, UNICEF and the Government of Finland host the Global Forum on AI for Children. This event gathers the world’s foremost children’s rights and technology experts, policymakers, practitioners and researchers, as well as children active in the AI space, to connect and share knowledge on pressing issues at the intersection of children’s rights, digital technology policies and AI systems. The forum aims to recap project achievements and impacts, share knowledge of what has worked and what hasn’t for more child-centred AI, and enable networking on how the work can continue and inspire participants to act.

© UN News (2021) — All Rights Reserved. Original source: UN News

Source: globalissues.org


Growing Amazon Deforestation a Grave Threat to Global Climate


Brazil has a “green future,” announced Environment Minister Joaquim Leite and Vice-President Hamilton Mourão, in a videoconference presentation from Brasilia at the Glasgow climate summit, in an attempt to shore up Brazil’s credibility, damaged by Amazon deforestation. The two officials concealed the fact that deforestation in the Amazon rose by 21.9 percent last year. CREDIT: Marcelo Camargo/Agência Brasil-Fotos Públicas

by Mario Osava (Rio de Janeiro) | Friday, November 26, 2021 | Inter Press Service

The report by the National Institute for Space Research (INPE), based on data for the year covering August 2020 to July 2021, is dated Oct. 27, but the government did not release it until Thursday, Nov. 18.

It thus prevented the disaster from further undermining the credibility of far-right President Jair Bolsonaro’s government, already damaged by almost three years of anti-environmental policies and actions, ahead of and during the 26th Conference of the Parties (COP26) to the climate change convention, held in Glasgow, Scotland from Oct. 31 to Nov. 13.

INPE’s Satellite Monitoring of Deforestation in the Legal Amazon Project (Prodes) recorded 13,235 square kilometers of deforestation, 21.97 percent more than in the previous period and almost three times the 2012 total of 4,571 square kilometers.

The so-called Legal Amazon, a region covering 5.01 million square kilometers in Brazil, has already lost about 17 percent of its forest cover. In a similarly sized area the forests have been degraded, i.e. some species were cut down and biodiversity and biomass were reduced, according to the non-governmental Amazon Institute of People and the Environment (IMAZON).

Carlos Nobre, one of the country’s leading climatologists and a member of the Intergovernmental Panel on Climate Change (IPCC), says the world’s largest tropical forest is approaching irreversible degradation in a process of “savannization” (the gradual transition of tropical rainforest into savanna).

The point of no return is a 20 to 25 percent deforestation rate, estimates Nobre, a researcher at the Institute of Advanced Studies of the University of São Paulo and a member of the Brazilian and U.S. national academies of sciences.

Reaching that point would be a disaster for the planet. Amazon forests and soils store carbon equivalent to five years of global emissions, experts calculate. Forest collapse would release a large part of these greenhouse gases into the atmosphere.

A similar risk comes from the permafrost, a layer of frozen subsoil beneath the Arctic and Greenland ice, for example, which is beginning to thaw in the face of global warming.

This is another gigantic carbon store that, if released, would seriously undermine the attempt to limit the increase in the Earth’s temperature to 1.5 degrees Celsius this century.

The Amazon rainforest, an immense biome spread over eight South American countries plus the territory of French Guiana, is therefore key in the search for solutions to the climate crisis.

Evolution of the deforested area in the Brazilian Amazon since 1988, with its ups and downs and an upward tendency in the last nine years. Policies to crack down on environmental crimes by strengthened public agencies were successful between 2004 and 2012. Graphic: INPE

Brazil, which accounts for 60 percent of the biome, plays a decisive role. And that is why it is the obvious target of the measure announced by the European Commission, which, with the expected approval of the European Parliament, aims to ban the import of agricultural products associated with deforestation or forest degradation.

The Commission, the executive body of the 27-nation European Union, does not distinguish between legal and illegal deforestation. It requires exporters to certify the exemption of their products by means of tracing suppliers.

Brazil is a leading agricultural exporter that is in the sights of environmentalists and leaders who, for commercial or environmental reasons, want to preserve the world’s remaining forests.

The 75 percent increase in Amazon deforestation in the nearly three years of the Bolsonaro administration exacerbates Brazil’s vulnerability to environmentally motivated trade restrictions.

This was the likely reason for a shift in the attitude of the governmental delegation in Glasgow during COP26.

Unexpectedly, Brazil signed on to the commitment to reduce methane emissions by 30 percent by 2030, a measure that affects cattle ranching, which accounts for 71.8 percent of the country’s emissions of this greenhouse gas.

As the world’s largest exporter of beef, which brought in 8.4 billion dollars for two million tons in 2020, Brazil had previously rejected proposals targeting methane, a gas at least 20 times more potent than carbon dioxide in global warming.

Brazil also pledged to eliminate deforestation by 2028, two years ahead of the target, and stopped obstructing agreements such as the carbon market, in a totally different stance from the one it had taken in the previous two years.

The threat of trade barriers and the attempt to improve the government’s international reputation are behind the new attitude. The new ministers of Foreign Affairs, Carlos França, and Environment, Joaquim Leite, in office since April and June, respectively, are trying to mitigate the damage caused by their anti-diplomatic and anti-environmental predecessors.

But the new data on Amazon deforestation and the delay in its disclosure unleashed a new backlash.

President Jair Bolsonaro stated that the Amazon has kept its forests intact since 1500 and does not suffer from fires because it is humid, in a Nov. 15 speech during the Invest Brazil Forum, held in Dubai to attract capital to the country. He made this claim when he already knew that in the last year deforestation had grown by almost 22 percent. CREDIT: Alan Santos/PR-Fotos Públicas

Leite claimed not to have had prior knowledge of the INPE report, difficult to believe from a member of a government known for using fake news and disinformation. He announced that the government would take a “forceful” stance against environmental crimes in the Amazon, commenting on the “unacceptable” new deforestation figures.

Together with the Minister of Justice and Public Security Anderson Torres, who has the Federal Police under his administration, he promised to mobilize the necessary forces to combat illegal deforestation.

The reaction is tardy and of doubtful success, given the contrary stance taken by the president and the deactivation of the environmental bodies by the previous minister, Ricardo Salles, who defended illegal loggers against police action.

The former minister stripped the two institutes executing environmental policy, one for inspection and the other for biodiversity protection and management of conservation units, of resources and specialists. He also appointed unqualified people, such as military police, to command these bodies.

President Bolsonaro abolished councils and other mechanisms for public participation in environmental management, as in other sectors, and encouraged several illegal activities in the Amazon, such as “garimpo” (informal mining) and the invasion of indigenous areas and public lands.

The result could only be an increase in the deforestation and forest fires that spread the destruction in the last two years. The smoke from the “slash-and-burn” clearing technique polluted the air in cities more than 1,000 kilometers away.

Bolsonaro, however, declared on Nov. 15 in Dubai, in the United Arab Emirates, that fires do not occur in the Amazon due to the humidity of the rainforest and that 90 percent of the region remains “the same as in 1500,” when the Portuguese arrived in Brazil.

His vice-president, General Hamilton Mourão, acknowledged that “deforestation in the Amazon is real, the INPE data leave no doubt.” His unusual disagreement with the president arises from his experience in presiding over the National Council of the Legal Amazon, which proposes and coordinates actions in the region.

Brazil had managed to reduce Amazon deforestation since the 2004 total of 27,772 square kilometers. A concerted effort by environmental agencies reduced the total to 4,571 square kilometers in 2012. This shows that it is possible, but it depends on political will and adequate management.

© Inter Press Service (2021) — All Rights Reserved. Original source: Inter Press Service

Article: globalissues.org
