Archive for the ‘European Union’ Category

How the European Union Allowed Hungary to Become an Illiberal Model – The New York Times

Mr. Weber still regrets the loss of Fidesz. "On one level, it is a relief," he said. But Orban's leaving is "not a victory, but a defeat" in the effort to hold the center-right together as a broad people's party.

It has helped Mr. Orban that the European Union has few and ineffective instruments for punishing a backsliding nation. Even the Lisbon Treaty, which gave enhanced powers to the European Parliament, offers essentially one unusable tool: Article 7, which can strip a country's voting rights, but only by a unanimous vote.

In 2017, Frans Timmermans, then the European Commission's first vice president responsible for the rule of law, initiated the article against Poland. The European Parliament did the same against Hungary in 2018.

But both measures inevitably stalled because the two countries protect each other.

The treaty also allows the commission to bring infringement procedures (legal charges against member states for violating E.U. law). But the process is slow, involving letters, responses and appeals, and final decisions rest with the European Court of Justice. Most cases are settled before reaching the court.

But according to studies by R. Daniel Kelemen of Rutgers University and Tommaso Pavone of the University of Oslo, the commission sharply reduced infringement cases after new member states joined in 2004. José Manuel Barroso, a former commission president, embraced this shift in order to work more cooperatively with governments rather than just sue them, Mr. Kelemen said. Mr. Barroso declined to comment.

Attitudes have shifted. With taxpayer money at stake, the next seven-year budget in the balance and the disregard for shared values shown by Mr. Orban and Mr. Kaczynski on leaders' minds, Brussels may finally have found a useful tool to affect domestic politics: lawsuits charging infringement of European treaties, combined with severe financial consequences.

"A marker has finally been laid down," Mr. Reynders said.

The big moment comes this month, when the European Court of Justice issues its ruling.


An interview with Covington & Burling discussing artificial intelligence in the European Union – Lexology

Marty Hansen represents several of the world's leading information technology companies on a broad range of technology regulatory issues, including intellectual property, artificial intelligence, law enforcement access, international trade and competition issues. Drawing on over two decades of experience, Marty also represents online services platforms and IT trade associations on a range of electronic commerce, platform and online liability issues.

Lisa Peets leads the technology and media practice in the firm's London office. Ms Peets divides her time between London and Brussels, and her practice embraces regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including some of the world's best-known technology, media and life science companies. Ms Peets counsels clients on a range of EU law issues.

Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies. Ms Choi advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues.

Jiayen Ong is a trainee solicitor in the London office who attended Queen Mary, University of London. She has experience across a broad range of practice areas, including competition law, dispute resolution and arbitration, corporate law and technology regulatory issues.

Currently, the European Union does not have laws or regulations that specifically regulate AI. However, a range of laws and regulations, both horizontal and sector-specific, may apply to AI technologies and applications. These include (among others) the following:

Other laws that may apply to AI applications, depending on the context, include product safety and liability rules, medical devices rules, financial services regulations, cybersecurity laws and consumer protection law.

In April 2021, the European Commission proposed a Regulation Laying Down Harmonised Rules on AI (the AI Act), which would establish rules on the development, placing on the market, and use of AI systems. The AI Act imposes different obligations on providers of different types of AI systems. The bulk of the provisions apply to providers of high-risk AI systems. Prior to placing a high-risk AI system on the EU market or putting it into service, providers must subject their systems to a conformity assessment procedure (either self-assessment or third-party assessment). To demonstrate compliance, providers must draw up an EU declaration of conformity and affix the CE marking of conformity. The AI Act also prohibits certain AI practices that are deemed to pose an unacceptable level of risk and to contravene EU values. The AI Act would also apply to systems, wherever marketed or used, where the output produced by the system is used in the Union. The proposed AI Act is not yet law and will likely be amended by the Council of the EU and the European Parliament (EP).

Like the EU, the UK has not yet adopted AI-specific legislation. Following the end of the Brexit transition period on 31 December 2020, the UK retains certain EU laws, such as the GDPR, by operation of the European Union (Withdrawal) Act 2018. However, the UK government has announced plans to reform UK data protection law. In the UK's National AI Strategy, published in September 2021, the government outlines an innovation-friendly approach to AI regulation that is likely to impose fewer requirements on AI developers and users than are currently set forth in the EU's proposed AI Act. The Office for AI is expected to publish a White Paper on regulating AI in early 2022.

In 2018, the European Commission published a Coordinated Plan on Artificial Intelligence, which set out a joint commitment by the Commission and the member states to work together to encourage investments in AI technologies, develop and act on AI strategies and programmes, and align AI policy to reduce fragmentation across jurisdictions. In April 2021, the European Commission conducted a review of the progress on the 2018 Coordinated Plan, and set out an updated plan with the following additional policy objectives:

The Commission has also proposed that the EU invest at least €1 billion per year from the Horizon Europe and Digital Europe programmes in AI. The review found that 19 of the 27 EU member states have adopted national strategies on AI; the remaining national strategies are in progress and are expected to be published soon.

On data-sharing, in early 2020, the Commission published a communication on shaping Europe's digital future and a European strategy for data. The communication also recommends enhancing regulatory frameworks to, among other things, encourage data sharing. Over the past year, the European Commission has proposed legislation aimed at furthering the European strategy for data:

As noted in the response to the previous question, the UK government has published its own National AI Strategy. That strategy emphasises the importance of ensuring access to and availability of data. One of the actions included in the AI Strategy is for the UK government to publish a policy framework setting out plans to enable better data availability in the wider economy. This framework will include supporting the activities of data intermediaries, including data trusts, and providing stewardship services between those sharing and accessing data.

The Commission's proposed AI Act (discussed in the response to question 1) seeks to address not only health and safety risks posed by AI, but also risks to fundamental rights. Under the proposed AI Act, different sets of obligations apply to different types of AI systems, as follows.

Some AI applications are prohibited outright. These include the provision or use of AI systems that either deploy subliminal techniques (beyond a person's consciousness) to materially distort a person's behaviour, or exploit the vulnerabilities of specific groups (such as children or persons with disabilities), in both cases where physical or psychological harm is likely to occur. The AI Act also prohibits public authorities from using AI for social scoring, where this leads to detrimental or unfavourable treatment in social contexts unrelated to the contexts in which the data was generated, or is otherwise unjustified or disproportionate. Finally, it bans law enforcement from using real-time remote biometric identification systems in publicly accessible spaces, subject to limited exceptions (eg, searching for specific potential victims of crime, preventing imminent threats to life or safety, or identifying specific suspects of significant criminal offences).

Certain AI systems are classified as inherently high-risk. These systems are enumerated exhaustively in Annexes II and III of the AI Act, and include AI systems that are, or are safety components of, certain regulated products (eg, medical devices, motor vehicles) and AI systems that are used in certain specific contexts or for specific purposes (eg, for remote biometric identification, for assessing students in educational or vocational training). The AI Act imposes a range of obligations on providers of high-risk AI systems. In particular, providers must design high-risk AI systems to enable record-keeping; allow for human oversight aimed at minimising risks to health, safety, or fundamental rights; and achieve an appropriate level of accuracy, robustness and cybersecurity. Data used to train, validate or test such systems must meet quality criteria, including for possible biases, and be subject to specified data governance practices. Providers must prepare detailed technical documentation, provide specific information to users, and adopt comprehensive risk management and quality management systems. Compliance with these obligations will be assessed through a conformity assessment procedure, and a high-risk AI system must be CE marked for conformity before it can be placed on the EU market. The AI Act also envisages obligations on importers and distributors to ensure that high-risk AI systems have undergone the conformity assessment procedure and bear the proper conformity marking before being placed on the market.

The AI Act imposes transparency obligations on certain non-high-risk AI systems. Specifically, providers of AI systems intended to interact with natural persons must develop them in such a way that people know they are interacting with the system; providers of emotion recognition and biometric categorisation AI systems must inform people who are exposed to them of their nature; and providers of AI systems that generate or manipulate images, audio or video content must disclose to people that the content is not authentic. For other non-high-risk AI systems, the AI Act encourages providers to create codes of conduct to foster voluntary adoption of the obligations that apply to high-risk AI systems.
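As a rough illustration (not legal advice), the tiered scheme described above can be sketched as a triage function. The flag names are our own simplifications of the Act's criteria; real scoping requires legal analysis of each system's specific context.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk: conformity assessment and CE marking required"
    TRANSPARENCY = "transparency obligations only"
    MINIMAL = "voluntary codes of conduct encouraged"

def classify_ai_system(
    uses_subliminal_or_exploitative_techniques: bool,
    is_public_authority_social_scoring: bool,
    is_realtime_biometric_id_by_law_enforcement: bool,
    listed_in_annex_ii_or_iii: bool,
    interacts_with_people_or_generates_synthetic_content: bool,
) -> RiskTier:
    """Rough triage mirroring the AI Act's tiers (simplified)."""
    # Tier 1: practices prohibited outright
    if (uses_subliminal_or_exploitative_techniques
            or is_public_authority_social_scoring
            or is_realtime_biometric_id_by_law_enforcement):
        return RiskTier.PROHIBITED
    # Tier 2: high-risk systems enumerated in Annexes II and III
    if listed_in_annex_ii_or_iii:
        return RiskTier.HIGH_RISK
    # Tier 3: systems subject only to transparency obligations
    if interacts_with_people_or_generates_synthetic_content:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

# e.g. an AI safety component of a regulated medical device (Annex II)
tier = classify_ai_system(False, False, False, True, False)
```

The prohibited-practice checks run first because they override the other categories; a system cannot be "high-risk but permitted" if it falls within a banned use.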

At the member state level, national strategies have also focused on the ethical and human rights implications of AI. Like the Commission, many member states have established independent bodies tasked with advising on ethical issues raised by AI. These include Germany's Data Ethics Commission (which has published ethical guidelines on automated and connected driving and an opinion on AI ethics), the UK's Centre for Data Ethics and Innovation (CDEI), the UK government's Office for AI (which has published guidance on AI Ethics and Safety, guidelines for AI procurement, and public sector-specific guidance), and France's National Consultative Committee for Ethics.

On 9 September 2021, the EU's recast of the Dual-Use Regulation entered into force. While export controls under the previous EU dual-use regulation applied to certain AI-based products, such as those that use encryption software, and any AI products specifically designed for a military end use, the updated Dual-Use Regulation broadens the scope of the controls and implements more extensive requirements for cyber-surveillance related goods, software and technology, and military-related technical assistance activities. That said, while it is a response to new security risks and emerging technology, the new regulation still does not contain AI-specific requirements.

The GDPR applies to all processing of personal data, including in the context of AI systems. This means that AI systems trained on personal data, or processing personal data, fall within the scope of the GDPR. The GDPR imposes, among other things, requirements to be transparent about the processing, identify a legal basis for the processing, comply with data subject rights, keep personal data secure, and keep records to demonstrate compliance with the GDPR.

Notably, the GDPR includes specific requirements on fully automated decision-making (ADM) that has legal or similarly significant effects on individuals (article 22). This provision is likely to be particularly relevant to AI-based algorithmic decision-making processes. Under the GDPR, individuals have the right not to be subject to ADM unless the processing is based on the individual's explicit consent, is necessary for performance of a contract between the organisation and the individual, or is authorised by member state or EU law. Even when these conditions are met, organisations must provide individuals with meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing (article 13(2)(f)). Organisations carrying out ADM must also implement safeguards, including, at a minimum, the right to contest the decision and obtain human review of the decision (article 22(3)).

Where sharing personal data between multiple organisations is required to develop or deploy an AI application, the GDPR's usual data-sharing rules apply. This includes ensuring that any joint controllers of the personal data set out their respective roles and responsibilities for compliance with the GDPR in a transparent way (article 26), and that data processing agreements are put in place with processors (article 28). Any cross-border transfers of personal data from within the European Union to outside the EU will also be subject to the usual rules that apply to international data transfers (Chapter V). Further, the development and deployment of AI technologies in certain contexts may also trigger the requirement to carry out a mandatory data protection impact assessment (article 35), which will require organisations to carry out an in-depth review of their data protection compliance specific to the project.
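The article 22 conditions described above can be summarised as a simplified checklist: ADM needs at least one lawful basis plus all of the listed safeguards. This is a sketch with our own field names, not a substitute for legal analysis.

```python
from dataclasses import dataclass

@dataclass
class ADMContext:
    """Simplified view of a fully automated decision with legal
    or similarly significant effects (GDPR article 22)."""
    has_explicit_consent: bool        # article 22(2)(c)
    necessary_for_contract: bool      # article 22(2)(a)
    authorised_by_law: bool           # article 22(2)(b)
    logic_explained_to_individual: bool   # article 13(2)(f)
    human_review_available: bool          # article 22(3)
    right_to_contest_provided: bool       # article 22(3)

def adm_permitted(ctx: ADMContext) -> bool:
    # At least one of the three lawful bases must apply...
    lawful_basis = (ctx.has_explicit_consent
                    or ctx.necessary_for_contract
                    or ctx.authorised_by_law)
    # ...and even then, all of the safeguards must be in place.
    safeguards = (ctx.logic_explained_to_individual
                  and ctx.human_review_available
                  and ctx.right_to_contest_provided)
    return lawful_basis and safeguards
```

For example, an ADM process resting on explicit consent but lacking a human-review mechanism would fail this check, mirroring the point that a lawful basis alone is not sufficient under article 22(3).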

A number of European data protection authorities (DPAs) have taken an interest in the application of the GDPR to AI. The UK Information Commissioner's Office (ICO) has published guidance documents regarding the application of data protection principles to AI. Other DPAs, including the French CNIL and the Spanish AEPD, have issued guidance on AI and data protection.

As there is currently no AI-specific legislation in Europe, government authorities do not yet have the power to enforce and monitor compliance with AI-specific legislation.

However, to the extent that existing laws and regulations apply to AI applications, government authorities have been exercising their powers under these rules in relation to AI applications. As noted in question 5, a number of DPAs have been issuing AI-specific guidance in relation to data protection law compliance.

Further, a number of DPAs have recently taken enforcement actions focused on specific AI use cases, particularly relating to facial recognition technology (FRT) used for surveillance purposes. For example, the Swedish DPA in February 2021 fined the Swedish police for using FRT to identify individuals, and in August 2019 fined the Skellefteå municipality for using FRT to track student attendance in a public school. Use of FRT systems by law enforcement for policing and security purposes was also the subject of a human rights challenge before the UK High Court (R (Bridges) v Chief Constable of South Wales Police [2019] WLR (D) 496 (UK)) and Court of Appeal (R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058), and resulted in the UK ICO issuing an opinion on the use of live FRT by law enforcement in public places. In November 2021, the UK ICO and the Office of the Australian Information Commissioner (OAIC) concluded their respective investigations of Clearview AI's facial recognition technologies. Although the ICO has not yet announced its decision, the OAIC has published its determination, which includes a declaration that Clearview AI must not repeat or continue practices found to have breached the Australian Privacy Act, and must cease collecting, and destroy, all images collected in contravention of that Act. Since many AI applications involve the processing of personal data, we expect DPAs to play an important role in monitoring AI applications.

On a related note, on 6 October 2021, the EP voted in favour of a non-binding resolution banning the use of FRT by law enforcement in public spaces, which formed part of a non-legislative report on the use of AI by the police and judicial authorities in criminal matters. The EP's report could form the basis of additional EU regulation on the use of AI in law enforcement if the Commission submits a legislative proposal (which could become another AI-specific law within the EU).

The EU has been a thought leader in the international discourse on ethical frameworks for AI. The AI HLEG's 2019 AI Ethics Guidelines were, at the time, among the most comprehensive examinations of AI ethics issued worldwide, and their drafting involved a number of non-EU organisations and several government observers. In parallel, the EU was closely involved in developing the OECD's ethical principles for AI and the Council of Europe's recommendation on the human rights impacts of algorithmic systems. At the United Nations, the EU was involved in the report of the High-Level Panel on Digital Cooperation, including its recommendation on AI. The Commission recognises that AI can be a driving force to achieve the UN Sustainable Development Goals and advance the 2030 agenda. The Commission states in its 2020 AI White Paper that the EU will continue to cooperate with like-minded countries and global players on AI, based on an approach that promotes the respect of fundamental rights and European values. Also, article 39 of the Commission's proposed AI Act provides a mechanism for qualified bodies in third countries to carry out conformity assessments of AI systems under the Act.

On 1 September 2021, the Commission announced an international outreach for human-centric AI project (InTouchAI.eu) to promote the EU's vision on sustainable and trustworthy AI. The aim is to engage with international partners on regulatory and ethical matters and promote responsible development of trustworthy AI at a global level. This includes facilitating dialogue and joint initiatives with partners, conducting public outreach and technology diplomacy, and conducting research, intelligence gathering and monitoring of AI developments. Also, at the first meeting of the US-EU Trade and Technology Council on 29 September 2021, the United States and EU affirmed their willingness and intention to develop AI systems that are innovative and trustworthy and that respect universal human rights and shared democratic values. The participants also established 10 working groups, one of which is tasked with addressing social scoring systems and with collaborating on projects furthering the development of trustworthy AI.

Further, on 3 November 2021, the Council of Europe published a recommendation on data protection in the context of profiling, which is defined as any form of automated processing of personal data, including machine learning systems, consisting in the use of data to evaluate certain personal aspects relating to an individual, particularly to analyse or predict that person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements. The recommendation encourages Council of Europe member states to promote and make legally binding the use of a privacy-by-design approach in the context of profiling, and sets out additional safeguards to protect personal data, the private life of individuals, and fundamental rights and freedoms such as human dignity, privacy, freedom of expression, non-discrimination, social justice, cultural diversity and democracy.

The UK is actively participating in the international discourse on norms and standards relating to AI. It continues to engage with the OECD, Council of Europe, United Nations, and the Global Partnership on AI (GPAI). The UK's National AI Strategy sets out the UK's ambition to create international AI standards that provide an agile and pro-innovation way to regulate AI technologies.

The most noteworthy AI-related developments in Europe have been the EU's proposed AI Act and the UK's National AI Strategy, discussed above.

Two areas that have seen notable growth in the use of AI-based products are FRT and digital health. The use of computer vision to power FRT systems for surveillance, identity verification and border control has been a notable development in the EU, raising a number of data protection law-related concerns, as discussed in the response to question 6. The use of other biometric identification systems, such as voice recognition technology, has also proliferated. Such technology can be seen in many forms, from voice authentication systems for internet banking to smart speakers for home use. The digital health sector has seen an increase in AI-powered solutions, including apps that diagnose diseases, software tools for those with chronic diseases, platforms that facilitate communication between patients and healthcare providers, virtual or augmented reality tools that help administer healthcare, and research projects involving analysis of large data sets (eg, genomics data).

As discussed above, the European Commission has published a proposed AI Act. Additionally, the UK government is expected to publish a White Paper on regulating AI in early 2022.

Companies developing or deploying AI applications in the EU should be mindful that a number of laws and regulations may apply to their AI application, including, but not limited to, those discussed in the preceding responses. Companies would be well advised to ensure compliance with these laws and to look to the government authorities responsible for enforcement in their sector for any sector-specific guidance on how these laws apply to AI applications. Companies should also closely monitor developments, including legislative proposals following the European Commission's proposed AI Act, and consider participating in the dialogue with policymakers on AI legislation to inform legislative efforts in this area.

At Covington, we take a holistic approach to AI that integrates our deep understanding of technology with our global and multi-disciplinary expertise. We have been working with clients on emerging technology matters for decades and have helped them navigate evolving legal landscapes, including at the dawn of cellular technology and the internet. We translate this experience into practical guidance that clients can apply in their transactions, public policy matters and business operations.

The development of AI technology is affecting virtually every industry and has tremendous potential to promote the public good, including to help achieve the UN Sustainable Development Goals by 2030. For example, in the healthcare sector, AI may continue to have an important role in helping to mitigate the effects of covid-19 and it has the potential to improve outcomes while reducing costs, including by aiding in diagnosis and policing drug theft and abuse. AI also has the potential to enable more efficient use of energy and other resources and to improve education, transportation, and the health and safety of workers. We are excited about the many great opportunities presented by AI.

AI has tremendous promise to advance economic and public good in many ways and it will be important to have policy frameworks that allow society to capitalise on these benefits and safeguard against potential harms. Also, as this publication explains, several jurisdictions are advancing different legal approaches with respect to AI. One of the great challenges is to develop harmonised policy approaches that achieve desired objectives. We have worked with stakeholders in the past to address these challenges with other technologies, such as the internet, and we are optimistic that workable approaches can be crafted for AI.


U.S. Customs and Border Protection Announces Details of Tariff-Rate Quotas for Steel and Aluminum Products From the European Union – JD Supra

U.S. Customs and Border Protection (CBP) recently announced details implementing the tariff-rate quota (TRQ) system that the United States and the European Union (EU) negotiated as a replacement for Section 232 national security tariffs on certain steel and aluminum products. The TRQ system went into effect on January 1, 2022.

The U.S. Department of Commerce (DOC) has established overall TRQ limits, by individual Harmonized Tariff Schedule (HTS) steel/aluminum group covered. The DOC has subdivided the overall TRQ into TRQ limits for merchandise produced by each EU country, based on historical U.S. import data. The TRQ limits are also divided by quarter.

Country-specific TRQs for steel products entered during the first and second quarters of 2022 are provided in CBP Publication No. 1628-1221: EU Sec. 232 Steel Tariff Rate Quota (TRQ) 2022 Q1 and Q2.

Country-specific TRQs for aluminum products entered during the first and second quarters of 2022 are provided in CBP Publication No. 1627-1221: EU Sec. 232 Aluminum Tariff Rate Quota (TRQ) 2022.

Please note that, within the EU country-specific and product-specific limits, CBP will administer the programs on a first-come, first-served basis. If covered merchandise enters after the quota limit for the quarter has been reached, the merchandise will be subject to the Section 232 tariffs. Also note that merchandise qualifying for a product exclusion will not be counted against the EU TRQ. Therefore, if your company has entered steel or aluminum in the past under a Section 232 tariff exclusion, you should continue to do so and renew your exclusion as necessary.
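The first-come, first-served mechanics described above can be sketched as simple quota accounting. This is purely illustrative; the quota keys, units, and return strings are our own assumptions, not CBP's actual schema or terminology.

```python
def enter_merchandise(quota_remaining: dict, country: str, product: str,
                      quantity: float, has_exclusion: bool) -> str:
    """Illustrative first-come, first-served TRQ accounting."""
    if has_exclusion:
        # Merchandise under a Section 232 product exclusion is not
        # counted against the EU TRQ at all.
        return "duty-free under Section 232 exclusion"
    key = (country, product)
    available = quota_remaining.get(key, 0.0)
    if quantity <= available:
        # Within the quarterly country/product quota: enters tariff-free
        # and draws down the remaining quota.
        quota_remaining[key] = available - quantity
        return "within TRQ: enters free of Section 232 tariff"
    # Quota for the quarter is filled: Section 232 tariff applies.
    return "quota filled: Section 232 tariff applies"

quotas = {("Germany", "steel"): 1000.0}
enter_merchandise(quotas, "Germany", "steel", 400.0, False)
# remaining quota for ("Germany", "steel") is now 600.0
```

Note how an excluded entry bypasses the quota draw-down entirely, which is why the text advises companies with existing exclusions to keep renewing them.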

Additional details for the steel TRQ program are provided in the CBP bulletin, QB 22-801 2022: First and Second Quarter Tariff Rate Quota (TRQ) for Steel Mill Articles of European Union (EU) Member Countries.

Additional details for the aluminum TRQ program are provided in the CBP bulletin, QB 22-901 2022: First and Second Period Tariff Rate Quota (TRQ) for Aluminum Articles of European Union (EU) Member Countries.

Please also note that President Biden has directed the DOC to publish a notice seeking comments from interested parties on the Section 232 exclusion process. Following the comment period, the DOC will issue a regulation to revise the exclusion process as appropriate, including consideration of whether the availability and national security criteria for granting exclusions continue to be appropriate. We highly encourage companies that are utilizing the exclusion process to file comments to continue the exclusion on the basis of availability.


European Commission's Proposal to End the Misuse of Shell Entities for Tax Purposes within the EU – JD Supra

BACKGROUND

On 22 December 2021, the European Commission presented a proposal for a new directive to fight the misuse of shell entities for improper tax purposes.

This proposal has been issued to ensure that entities in the European Union that have no or minimal economic activity are unable to benefit from any tax advantages and do not place any financial burden on taxpayers.

The proposed new measures will establish transparency standards around the use of shell entities, so that their abuse can be detected by tax authorities in a more efficient way.

An entity falling within the scope of the new directive will be required to report information in its tax return, such as the company's premises, its bank accounts, and the tax residency of its directors and employees.

If a company is deemed a shell company because it fails the substance test, it will not be able to access tax relief or the benefits of its Member State's tax treaty network, or to qualify for treatment under the Parent-Subsidiary and Interest and Royalties Directives. In addition, payments to third countries will not be treated as flowing through the shell entity and will be subject to withholding tax at the level of the entity paying the shell entity. Accordingly, inbound payments will be taxed in the state of the shell's shareholder.

Once adopted and transposed by Member States, the new rules should come into force as of 1 January 2024.

More details will follow soon on the impact of this new directive on fund activity.


As the U.S. seeks to calm Russia tensions, Europe pushes to be included – CNBC

European Union foreign policy chief Josep Borrell and Ukrainian Foreign Minister Dmytro Kuleba visit the line of contact in Luhansk, Ukraine.


The United States and Russia are having key talks next week and the EU's top diplomat is disappointed that the bloc will not be around the table as well.

A potential Russian invasion of Ukraine is a top concern for many leaders, given multiple reports of heightened military activity close to the border. In a bid to ease these tensions, top U.S. and Russian officials will be gathering in Geneva, Switzerland on Monday. This meeting will precede wider talks between Russia and members of the North Atlantic Treaty Organization (NATO) on Wednesday.

However, the EU, the political and economic bloc of 27 nations, will not be present as a whole, despite several of its members bordering Russia.

"There is no security in Europe without the security of Ukraine. And it is clear that any discussion on European security must include European Union and Ukraine," Josep Borrell, the EU's high representative in charge of foreign affairs, said at a press conference on Wednesday.

"Any discussion about Ukraine must involve Ukraine first of all. And the talk about security in Europe cannot be done without not only the consultations, but the participation of the Europeans," Borrell said in Ukraine, where he visited the eastern part of the country; low-scale military skirmishes between Ukrainian troops and pro-Russian forces have been going on there for several years.

This marked the first time that the EU's top diplomat visited the conflict-hit region.

However, an analyst at the consultancy firm Teneo said that the exclusion of the EU from the talks is not surprising.

"The sidelining of the EU from the upcoming talks is hardly surprising, given that NATO, and particularly the U.S., serves as the main guarantor of security in CEE (Central and Eastern Europe)," Andrius Tursa said Wednesday in a note.

In fact, the EU as a whole does not have a strong defense capacity; it relies mostly on NATO, and to some extent on the U.S., when it comes to security.

But, regardless of its security capacities, there's a lot at stake for the EU in upcoming talks with Russia, including on the energy front.

The majority of natural gas going into Europe already comes from Russia; in 2020, Russian gas represented about 43% of the bloc's total gas imports, according to Eurostat. And a key pipeline between Russia and Germany, Nord Stream 2, hangs in the balance amid the ongoing tensions with the Kremlin. That is a problem for Russia, which could be making more money from gas exports, and for the EU, because the pipeline could help contain some of the price increases registered in recent months.

Wolfgang Ischinger, former German ambassador to the U.S., told CNBC earlier this week that Nord Stream 2 is something that the EU can use to pressure Moscow.

"I think the pipeline represents a major item of leverage for us, if we handle it smartly," Ischinger, now chairman of the Munich Security Conference, told CNBC's Hadley Gamble.

Borrell's aim to be included in the talks with Russia comes almost a year after a "humiliating" trip to Russia.

The EU's top diplomat visited Moscow last February to voice the bloc's opposition against the arrest of Russian opposition politician Alexei Navalny. During the trip, Borrell was heavily criticized after failing to rebuff comments from his Russian counterpart that the EU was an "unreliable partner."

This took the EU-Russian relationship to a new low, according to political analysts.

However, concerns about a potential Russian invasion of Ukraine are complicating their relationship further.

"The conflict on the borders is on the verge of getting deeper and tensions have been building up with respect to the European security as a whole," Borrell said Wednesday.

It is estimated that about 100,000 Russian troops have been deployed to the country's border with Ukraine. Both countries have been at war since 2014, the year when Moscow annexed Crimea.

The Kremlin, for its part, has denied any plans to invade Ukraine.

However, Russia has demanded that NATO and the U.S. decrease their presence in eastern Europe and do not allow Ukraine to become a member of the military alliance.

One of the founding principles of NATO is that an attack against one of them is considered an attack against all.
