The EU Artificial Intelligence Act (AI Act), copyright and data protection: what it really means for companies and developers in Romania

This article summarises the risk-based approach of the AI Act and how it overlaps with copyright and data protection obligations. It then translates dense rules into concrete checklists for Romanian businesses and developers, from documentation and contracts to governance and impact assessments.

Regulation (EU) 2024/1689 on artificial intelligence, widely known as the AI Act, is the first comprehensive legal framework in the world dedicated specifically to artificial intelligence. It is an EU regulation, which means it applies directly and uniformly in all Member States, including Romania, without needing transposition. In practice, any Romanian company or software developer that creates, integrates or uses AI systems should already be asking a very concrete question: “Where do I fall under the AI Act and what obligations do I actually have?”

At the same time, building and using AI models raises two highly sensitive legal issues:
  • copyright and other intellectual property rights, especially where models are trained on large volumes of text, images, code or databases;
  • personal data protection, where datasets include information about individuals and the General Data Protection Regulation (GDPR) continues to apply in full, even though there is now a dedicated AI regulation.

This article aims to explain, in clear language but with legal rigor:

  • how the AI Act entered into force at EU level and what the staggered application timeline looks like;
  • what practical obligations arise for companies and developers in Romania, depending on their role (provider, user / deployer, importer, distributor, provider of general-purpose AI models – GPAI);
  • what controversial issues are raised by training models on copyright-protected works and how the AI Act interacts with copyright rules;
  • how the AI Act combines with GDPR and national data protection law;
  • what litigation risks appear (copyright, privacy, liability for damage) and what realistic compliance strategies look like for a Romanian business;
  • where, in practice, the lawyer’s role begins, especially one who understands both technology and areas such as data protection and intellectual property.

Along the way, we will refer to the official text of the AI Act, the GDPR, the Romanian National Strategy on Artificial Intelligence 2024–2027, as well as to policy documents and guidance issued at EU and national level.

1. The European framework: what is the AI Act and from when does it apply?

1.1. What the AI Act is and why it matters for Romania

The AI Act lays down harmonised rules for the placing on the market, putting into service and use of AI systems in the European Union. The official text in English can be accessed on EUR-Lex at: Regulation (EU) 2024/1689 – Artificial Intelligence Act.

The Act is built on a risk-based approach:

  • some AI practices are prohibited outright (for example certain forms of social scoring, manipulative techniques, AI systems predicting criminal offences based solely on profiling or personality traits, and real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions);
  • other AI systems are categorised as high-risk – for instance AI used in critical infrastructure, recruitment, credit scoring, access to essential services, education, law enforcement or justice – and are subject to detailed compliance obligations;
  • a special regime applies to general-purpose AI models (GPAI), such as large language models and generative models for image, audio, video or code, which come with targeted transparency and documentation duties;
  • AI systems that pose limited risk are mainly subject to transparency obligations (for example, informing users that they are interacting with AI), while minimal-risk AI remains largely unregulated.

For Romanian companies and developers, the AI Act matters because:

  • it does not apply only to “big tech”, but to anyone placing AI systems on the EU market or using them in the EU, including local start-ups, systems integrators and traditional businesses that are digitalising their processes;
  • it also applies if the provider is established outside the EU but makes the AI system available in the EU or its outputs are used in the EU. Romanian developers that sell globally may therefore benefit from a level playing field, but they also take on serious obligations;
  • it introduces very high administrative fines (up to tens of millions of euros or up to 7% of worldwide annual turnover, depending on the infringement), similar to or even higher than those under the GDPR. Compliance is, therefore, a core risk management issue, not just “paperwork for lawyers”.

1.2. Key dates: from entry into force to full application

From a timing perspective, several milestones are important and are confirmed in official EU sources:

  • 1 August 2024 – the AI Act entered into force in the EU (20 days after publication in the Official Journal). From this date, the Act formally exists in the EU legal order, but not all obligations are yet applicable.
  • 2 February 2025 – the first set of provisions started to apply, in particular the prohibitions on certain AI practices and some AI literacy provisions.
  • 2 August 2025 – the obligations for general-purpose AI models (GPAI) became applicable, including transparency and technical documentation duties for providers.
  • 2 August 2026 – the core of the AI Act becomes applicable to high-risk AI systems, including risk management, data governance, documentation, human oversight, robustness and cybersecurity requirements.
  • 2 August 2027 – the requirements become applicable for high-risk AI systems embedded in products covered by EU harmonised product legislation; in addition, AI systems already placed on the market before the relevant application dates (so-called “legacy systems”) generally need to be brought into conformity only if they are significantly modified afterwards.

For business planning, the message is clear: you do not have “until 2026” to think about the AI Act. Many obligations – particularly prohibitions, transparency and GPAI duties – have legal effects much earlier. The companies that start mapping their AI use and risk exposure now are the ones that will be able to negotiate better contracts and avoid last-minute panic in 2026–2027.
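
For internal planning, the staggered timeline above can even be tracked as simple data. A minimal sketch in Python, using only the milestone dates listed above (the short labels are illustrative summaries, not legal categories):

```python
from datetime import date

# AI Act application milestones, as summarised above (labels are illustrative).
AI_ACT_MILESTONES = {
    date(2024, 8, 1): "entry into force",
    date(2025, 2, 2): "prohibited practices + AI literacy provisions apply",
    date(2025, 8, 2): "GPAI model obligations apply",
    date(2026, 8, 2): "core high-risk AI system requirements apply",
    date(2027, 8, 2): "deadline for high-risk AI embedded in regulated products",
}

def applicable_milestones(as_of: date) -> list[str]:
    """Return the milestone descriptions already in effect on a given date."""
    return [desc for d, desc in sorted(AI_ACT_MILESTONES.items()) if d <= as_of]

print(applicable_milestones(date(2025, 9, 1)))
# ['entry into force', 'prohibited practices + AI literacy provisions apply',
#  'GPAI model obligations apply']
```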

2. EU strategy and the Romanian National AI Strategy 2024–2027

2.1. The EU objective: innovation with safeguards

The AI Act does not sit in a vacuum. It is part of a broader EU digital regulatory package that also includes the Digital Services Act (DSA), the Digital Markets Act (DMA), the Data Act and a range of cybersecurity and data governance instruments. The declared goal is to foster “trustworthy AI” – AI that respects human dignity, fundamental rights, safety and the functioning of the EU internal market.

For entrepreneurs and developers in Romania, the message is twofold:

  • there is a clear regulatory push (fines, technical obligations, transparency and governance requirements);
  • but there is also a market opportunity: products that are “AI Act ready” will be more attractive to enterprise customers and public authorities that must themselves comply with EU rules and procurement standards.

2.2. Romania’s National AI Strategy 2024–2027

Romania has adopted a National Strategy on Artificial Intelligence for 2024–2027, available (in Romanian) via the Government’s Secretariat General and the Ministry of Research, Innovation and Digitalisation. The Strategy is aligned with the EU’s Coordinated Plan on AI and sets out objectives such as:

  • investing in AI research and development;
  • developing a digital ecosystem for AI (infrastructure, data, skills, innovation hubs);
  • creating a regulatory and governance framework compatible with the AI Act;
  • supporting the public administration in adopting AI solutions in a safe and efficient way.

For the private sector, the message is that AI adoption is not optional in the medium term and that EU-level requirements will become the de facto standard even in purely domestic contracts and procurement procedures.

2.3. Draft Romanian laws on AI and national authorities

In parallel with the direct applicability of the AI Act, Romania has started to discuss national legislation on AI, aimed at regulating responsible AI use in specific sectors and designating competent authorities for supervision and enforcement. By 2025, draft laws have been registered in Parliament proposing oversight structures and mechanisms for cooperation between regulators.

For companies, this means that in addition to the EU-level AI Office and European framework, there will be Romanian national authorities responsible for market surveillance and enforcement of the AI Act, similar to how the National Supervisory Authority for Personal Data Processing (ANSPDCP) enforces the GDPR.

3. Who has obligations under the AI Act? Providers, deployers, integrators, GPAI

3.1. Providers vs. users (deployers)

The AI Act recognises that not everyone in the AI value chain plays the same role. It distinguishes, in particular, between:

  • provider – the natural or legal person who develops or has an AI system developed and places it on the market or puts it into service under its own name or trademark; in practice, the developer of the model or application;
  • user / deployer – the natural or legal person using an AI system under its authority (for example a bank using AI-based credit scoring, a company using AI for recruitment, or a law firm using AI to summarise documents);
  • importer – when the AI system is developed outside the EU but is placed on the EU market;
  • distributor – any entity that makes AI systems available in the EU without being the provider or importer, such as resellers or integrators.

For a Romanian company, it is essential to clarify for each AI project: what is my role? For one product you may be the provider, for another only a deployer, and for a third you may be an integrator or distributor. The scope and intensity of obligations differ significantly by role.
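
One pragmatic way to force that clarification is to record the role for each project in a structured inventory entry. A minimal sketch, where the role names follow the AI Act taxonomy above and everything else is an illustrative internal convention:

```python
from dataclasses import dataclass
from enum import Enum

class AIActRole(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AIProjectRole:
    project: str
    system_description: str
    role: AIActRole
    rationale: str  # why this role applies, e.g. "sold under our own trademark"

# Example: the same company can hold different roles in different projects.
portfolio = [
    AIProjectRole("hr-screening", "CV ranking tool sold under our brand",
                  AIActRole.PROVIDER, "placed on the market under our own name"),
    AIProjectRole("support-bot", "chatbot built on a third-party GPAI API",
                  AIActRole.DEPLOYER, "used under our authority, not marketed"),
]
```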

3.2. General-purpose AI models (GPAI) and apps built on top of them

A special category introduced by the AI Act is that of general-purpose AI models (GPAI) – large models that can be used in a very wide range of applications (for example, large language models or multimodal models for text, image, audio, video or code generation).

The Act sets direct obligations for GPAI providers (technical documentation, transparency about capabilities and limitations, information about training data, cybersecurity measures, etc.), but this has a direct impact on the whole downstream chain:

  • if you are a Romanian start-up building an application on top of a GPAI API (chatbot, agent, document analysis tool), you may act as a deployer where you only use the system internally, or as a provider of the resulting AI system where you offer it on the market under your own name – and, depending on the use case, the system may qualify as high-risk;
  • contracts with GPAI providers will increasingly contain AI Act clauses: obligations to share information, to cooperate on incident reporting, to respect copyright and data protection law;
  • large customers (banks, insurers, public authorities) will ask Romanian suppliers to declare how they comply with the AI Act and GDPR, and lack of a minimum level of documentation may simply mean losing the tender.

4. AI Act and copyright: training models on protected works

4.1. What the AI Act says about copyright

The AI Act does not rewrite EU copyright law, but it makes it explicitly relevant for AI providers, especially for GPAI models. Among the key rules are:

  • providers of GPAI models must respect EU copyright and related rights law, including any opt-outs from text and data mining declared by rightsholders;
  • they must provide a “sufficiently detailed summary” of the content used for training, enabling rightsholders to understand, at least at category level, what types of works were included;
  • these obligations interact with existing EU rules on text and data mining and database rights, which already allow certain automated uses of content, but with important limits and opt-out mechanisms (a sketch of checking such opt-out signals follows at the end of this subsection).

For Romanian companies and developers, this means that they can no longer ignore who owns the rights to the data used to train models, even when they rely on pre-trained models. In practice, we will see more and more:

  • contractual guarantees from GPAI providers that they have a legal basis to use the training data;
  • requests from authors, photographers, media organisations or platforms for detailed information about training data and, where appropriate, compensation;
  • strategic litigation aimed at clarifying whether certain forms of large-scale web scraping and use of databases are compatible with copyright and database rights.
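
On the opt-out point specifically, there is not yet a single standardised machine-readable format, but robots.txt rules targeting publicly documented AI crawlers are one signal rightsholders already use. A minimal sketch of checking such signals before collecting content – treating robots.txt as an opt-out indicator is an assumption for illustration, not a settled legal test:

```python
from urllib import robotparser

# Publicly documented AI crawler user agents (examples; not exhaustive).
AI_CRAWLER_AGENTS = ["GPTBot", "CCBot", "Google-Extended"]

def tdm_opt_out_signals(site: str, path: str = "/") -> dict[str, bool]:
    """Check whether a site's robots.txt disallows known AI crawlers for a path.

    A disallow here is only an indication of an opt-out; the legal analysis
    (licences, T&Cs, the text and data mining exception and its reservation
    mechanism) must still be done separately.
    """
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()  # fetches robots.txt over the network
    return {agent: not rp.can_fetch(agent, f"{site.rstrip('/')}{path}")
            for agent in AI_CRAWLER_AGENTS}

# Example (requires network access):
# print(tdm_opt_out_signals("https://example.com"))
```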

4.2. Practical examples for a Romanian developer

Consider a few concrete scenarios:

  • you develop an AI model that generates marketing copy in Romanian, trained on online news articles, blogs and public social media posts;
  • you build an image recognition system for e-commerce, trained on product photos taken from various retailers’ websites;
  • you develop a code generation model, trained on public code repositories hosted on popular platforms.

In each of these scenarios, the key question is: do you have the right to use the content in this way? The answer depends on:

  • the licences under which the content is available (open source, Creative Commons, restrictive licences, bespoke T&Cs);
  • whether the terms and conditions of the websites or platforms explicitly prohibit AI training;
  • whether you can rely on a text and data mining exception and, if so, whether rightsholders have exercised their opt-out;
  • to what extent the model can reproduce verbatim or near-verbatim fragments of protected works, or otherwise affect the normal exploitation of those works.

From a risk-management perspective, it is prudent to:

  • work as much as possible with licensed datasets or content for which you have explicit agreements;
  • document your main data sources and the legal basis for using them (a sketch of such a provenance record follows this list);
  • implement technical measures that reduce memorisation and verbatim reproduction of protected fragments;
  • clarify in contracts with customers which party bears the legal risk in the event of copyright claims.
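
The documentation point above is easiest to operationalise as a provenance record kept alongside each dataset. A minimal sketch; the field names are an illustrative internal convention, not an official AI Act template:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetProvenance:
    name: str
    source: str            # where the content came from (URL, vendor, archive)
    licence: str           # e.g. "CC-BY-4.0" or a commercial licence reference
    legal_basis: str       # e.g. "licence agreement", "TDM exception assessed"
    opt_out_checked: bool  # whether rightsholder opt-out signals were reviewed
    collected_on: str      # ISO date of collection
    notes: list[str] = field(default_factory=list)

# Hypothetical example entry:
scraped_news = DatasetProvenance(
    name="ro-news-2024",
    source="https://example.com/archive",
    licence="publisher T&Cs, no explicit AI-training clause",
    legal_basis="TDM exception assessed; opt-out review documented",
    opt_out_checked=True,
    collected_on="2025-03-10",
    notes=["verbatim-reproduction tests run before model release"],
)
```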

5. AI Act and data protection: how it fits together with GDPR

5.1. Two regimes, not one

A crucial point to understand is that the AI Act does not replace the GDPR. The AI Act focuses on the safety and fundamental rights implications of AI systems and on the functioning of the internal market, while the GDPR continues to apply fully whenever personal data are processed.

In practice, a Romanian company that develops or uses AI must answer two separate – but related – sets of questions:

  • Under the AI Act: is my AI system prohibited, high-risk, or in another category? What obligations do I have as a provider or user? Do I have risk management, data governance, technical documentation, human oversight and post-market monitoring in place?
  • Under the GDPR: am I processing personal data? For what purposes? What is my legal basis (consent, contract, legal obligation, legitimate interest)? Do I respect the principles of data minimisation, purpose limitation, transparency, storage limitation and security? Do I need a Data Protection Impact Assessment (DPIA)?

European data protection authorities and the European Data Protection Board (EDPB) have stressed that compliance with one regime does not automatically mean compliance with the other. Companies must therefore build an integrated view of both sets of obligations.

5.2. Training models on personal data

One of the most sensitive issues is training AI models on large datasets containing personal data: usage histories, logs, chat transcripts, images of individuals, biometric data, behavioural profiles and so on.

Under the GDPR, companies must carefully consider:

  • the specific purpose of training (for example improving the model, providing personalisation, fraud detection);
  • the legal basis relied upon (legitimate interest, performance of a contract, consent, legal obligation, vital interests etc.);
  • whether there are less intrusive alternatives that could achieve the same purpose;
  • for sensitive data (health, political opinions, religious beliefs, sexual orientation and others), whether a specific exception applies and under what safeguards;
  • how long the data are retained and whether genuine anonymisation (not just pseudonymisation) is feasible.

Data protection authorities consistently emphasise that:

  • organisations often overestimate when they can rely on legitimate interests as a legal basis for AI training;
  • “anonymisation” is frequently incomplete, leaving a real risk of re-identification in complex datasets (illustrated in the sketch after this list);
  • the idea “we collected the data for one purpose, now we use them for any AI purpose” is incompatible with the purpose limitation principle.
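
The anonymisation point deserves a concrete illustration. Replacing names with salted hashes, a common shortcut, is pseudonymisation rather than anonymisation: anyone who holds (or can rebuild) the mapping can re-identify individuals. A minimal sketch of why:

```python
import hashlib

def pseudonymise(name: str, salt: str) -> str:
    """Replace a name with a salted hash. This is pseudonymisation under the
    GDPR, NOT anonymisation: with the salt and a list of candidate names,
    the mapping can be reversed."""
    return hashlib.sha256((salt + name).encode()).hexdigest()[:12]

salt = "project-secret"
token = pseudonymise("Ion Popescu", salt)

# Re-identification with access to the salt and a candidate list:
candidates = ["Maria Ionescu", "Ion Popescu", "Andrei Pop"]
matches = [c for c in candidates if pseudonymise(c, salt) == token]
print(matches)  # ['Ion Popescu'] - the "anonymised" record is traceable
```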

In Romania, the competent authority is the National Supervisory Authority for Personal Data Processing (ANSPDCP), which publishes guidance and decisions relevant for AI-related processing.

5.3. Data subject rights in the context of AI

Even with the new AI rules, individuals remain protected by the full set of data subject rights under the GDPR, including:

  • right to information – to know that their data are being used in AI systems;
  • right of access – to obtain information about the processing, including meaningful information about the logic involved in automated decisions where relevant;
  • right to rectification and erasure – where data are inaccurate or no longer necessary;
  • right to restriction of processing and right to object – especially where legitimate interests are relied upon;
  • right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them, unless additional safeguards are in place.

For companies, this means they must be able to explain in human-understandable terms what role AI plays in a given decision, what data were used and what options the individual has to challenge or seek human review of that decision.
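
To be able to give such explanations on request, it helps to capture an explanation record at decision time rather than reconstructing it afterwards. A minimal sketch; the fields mirror the questions above, and the names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    decision_id: str
    outcome: str                  # e.g. "application declined"
    ai_role: str                  # "fully automated" / "AI-assisted, human decided"
    main_factors: list[str]       # plain-language factors driving the outcome
    data_categories_used: list[str]
    human_review_available: bool  # can the person request human review?

record = DecisionRecord(
    decision_id="2025-000123",
    outcome="credit application declined",
    ai_role="AI-assisted; final decision taken by a credit officer",
    main_factors=["income-to-debt ratio", "short credit history"],
    data_categories_used=["income data", "repayment history"],
    human_review_available=True,
)
```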

6. Litigation risks for companies and developers in Romania

6.1. Copyright and intellectual property disputes

As generative AI has become widespread, copyright disputes have inevitably emerged around the world. While case law is still evolving, the trend is clear:

  • rightsholders (authors, visual artists, photographers, media organisations, platforms) are challenging the use of their works in training datasets without consent or a clear legal exception;
  • they are raising questions about whether models can output substantial reproductions of protected works, either verbatim or in a way that interferes with normal exploitation of the work;
  • courts are being asked to define due diligence standards for AI providers when it comes to training data and protection of rightsholders’ interests.

For a Romanian developer selling models or AI-enabled products internationally, this is not just a theoretical risk. Commercial customers are increasingly demanding contractual clauses by which the developer:

  • warrants that copyright and related rights are respected in the development and operation of the product;
  • indemnifies the customer against certain copyright claims related to the AI functionality;
  • accepts transparency obligations regarding training data sources or content filtering mechanisms.

6.2. Data protection investigations and fines

In parallel, European data protection authorities have already taken enforcement action against AI-based services where they considered that the processing of personal data for training or deployment lacked a valid legal basis or breached GDPR principles. Examples from other Member States show that:

  • absence of a clear legal basis for training on personal data can lead to substantial fines and orders to limit or suspend processing;
  • lack of transparency towards users (vague privacy notices, opaque terms of use) is viewed very critically;
  • issues around age verification and content filtering for minors are becoming a major enforcement priority.

In Romania, ANSPDCP can launch investigations both ex officio and following complaints by data subjects. For companies, the real risk is not only the fine itself, but also the possibility that a product or service may be suspended or restricted until compliance issues are solved.

6.3. Civil and commercial liability for AI-enabled decisions

Beyond regulatory enforcement, there is a growing risk of civil and commercial claims where a party alleges that an AI system generated an incorrect, discriminatory or negligent decision or recommendation.

Examples might include:

  • a rejected job applicant challenging an AI-based screening system and alleging discrimination;
  • a bank customer contesting an AI-assisted credit decision and claiming lack of transparency or systemic bias;
  • a corporate client alleging that an AI-powered document analysis tool produced misleading outputs that led to financial losses.

In such cases, courts will look at factors such as:

  • how the AI system was designed and validated (including human oversight and escalation processes);
  • what disclaimers and limitations were communicated to the customer and end users;
  • how the contract allocates liability and indemnification between the parties.

7. Compliance strategies for Romanian companies and developers

7.1. Inventory and classification of AI systems

The first sensible step is not to draft a policy, but to create an honest inventory of all AI usage in the organisation:

  • what AI systems do we develop internally (proprietary models, algorithms, integrated solutions)?
  • what products or services sold to customers contain AI components, even in a limited way (recommendation engines, scoring, clustering, anomaly detection)?
  • what AI tools do we use internally (generative AI for content, analytics, HR tools, security and monitoring, customer support bots)?

For each system identified, a high-level risk classification should be made:

  • high-risk – for example AI used in recruitment, credit scoring, access to essential services, education, certain health-related applications;
  • other categories – where we must check whether only transparency obligations apply, or whether, in context, the system falls into a regulated category.

Even if the final classification will need to be refined with expert help, this initial mapping is indispensable. It is impossible to manage AI risk if the organisation does not even know which systems use AI and in what way.
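
Even a spreadsheet can hold this mapping, but a small structured inventory makes the later, expert-assisted classification easier to refine. A minimal sketch, in which the triage labels are a rough internal first pass, not a formal AI Act classification:

```python
from dataclasses import dataclass

# Illustrative triage keywords for use cases the article flags as high-risk.
HIGH_RISK_HINTS = {"recruitment", "credit scoring", "education",
                   "essential services", "health"}

@dataclass
class AISystemEntry:
    name: str
    purpose: str
    developed_in_house: bool
    uses_personal_data: bool

def triage(entry: AISystemEntry) -> str:
    """Rough first-pass label; the final classification needs expert review."""
    if any(hint in entry.purpose.lower() for hint in HIGH_RISK_HINTS):
        return "candidate high-risk - prioritise legal review"
    return "other category - check transparency obligations"

inventory = [
    AISystemEntry("cv-ranker", "recruitment shortlisting", True, True),
    AISystemEntry("log-anomaly", "security anomaly detection", False, False),
]
for entry in inventory:
    print(entry.name, "->", triage(entry))
```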

7.2. Documentation, governance and “audit trail”

The AI Act puts a strong emphasis on documentation and governance. For high-risk AI systems, companies must have:

  • a risk management system throughout the AI lifecycle;
  • data governance procedures (data quality, relevance, representativeness, error handling, bias monitoring);
  • adequate technical documentation to support conformity assessment and enable traceability;
  • mechanisms for human oversight and intervention;
  • post-market monitoring and incident management obligations.

Even for systems that are not classified as high-risk, it is good practice to build at least a minimal audit trail, as sketched after this list:

  • who decided to introduce AI for a given purpose and why;
  • what data were used and on what legal basis;
  • what testing was carried out before deployment;
  • how performance and side-effects are monitored over time.
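
The four questions above translate naturally into an append-only log. A minimal sketch, using JSON Lines as one convenient storage format; the field names are illustrative:

```python
import json
from datetime import datetime, timezone

def log_ai_event(path: str, event: str, detail: str, decided_by: str) -> None:
    """Append one audit-trail entry; never rewrite past entries."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,        # e.g. "purpose approved", "dataset selected",
                               # "pre-deployment test", "monitoring review"
        "detail": detail,
        "decided_by": decided_by,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

log_ai_event("ai_audit.jsonl", "purpose approved",
             "chatbot for first-line customer support", "Head of Product")
```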

7.3. Combining AI Act and GDPR: integrated DPIA and risk assessments

For AI projects involving personal data and with a potential significant impact on individuals (for example credit scoring, automated eligibility decisions, large-scale monitoring), companies should carry out a Data Protection Impact Assessment (DPIA) that is integrated with the AI risk assessment required under the AI Act.

In practice:

  • the DPIA addresses questions such as “who are the data subjects, what data are used, what risks exist for rights and freedoms, what safeguards are implemented?”;
  • the AI risk assessment addresses questions such as “how does the system fit into the AI Act risk categories, what technical requirements apply, what robustness and accuracy tests are performed?”

An integrated approach avoids the situation where the technical team “ticks the AI Act boxes” and the legal team “ticks the GDPR boxes” without the two perspectives ever meeting in a meaningful way.
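
One simple way to enforce this in practice is a release gate that refuses deployment until both assessments exist and cross-reference each other. A minimal sketch; the checks are illustrative process rules, not requirements in this exact form from either regulation:

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    completed: bool
    reviewed_by: str      # who signed off (e.g. DPO, ML lead)
    cross_reference: str  # ID of the counterpart assessment, if linked

def release_gate(dpia: Assessment, ai_risk: Assessment) -> list[str]:
    """Return blocking issues; an empty list means the gate passes."""
    issues = []
    if not dpia.completed:
        issues.append("DPIA not completed")
    if not ai_risk.completed:
        issues.append("AI Act risk assessment not completed")
    if dpia.completed and ai_risk.completed and (
            not dpia.cross_reference or not ai_risk.cross_reference):
        issues.append("assessments do not reference each other")
    return issues

print(release_gate(
    Assessment(True, "DPO", "AIRA-2025-07"),
    Assessment(True, "ML lead", ""),
))  # ['assessments do not reference each other']
```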

7.4. Contracts, partnerships and the supplier chain

The AI Act and GDPR inevitably translate into more sophisticated contracts. In relationships between:

  • a GPAI provider and a Romanian application developer;
  • a Romanian developer and an enterprise client;
  • a Romanian company and non-EU AI service providers (cloud, APIs, specialised tools),

we will increasingly see clauses on:

  • allocation of responsibility for compliance with the AI Act and GDPR;
  • obligations to share information in the event of incidents or major model changes;
  • audit and access rights regarding technical or organisational measures;
  • financial liability caps and exclusions for different categories of risk (regulatory fines, IP claims, data breaches).

Romanian companies should avoid simply accepting any AI clause “as is”. Instead, they should realistically assess what they can commit to, what they can technically and organisationally deliver, and then align internal processes with the contractual promises already made.

8. The lawyer’s role: far beyond “translating” the regulation

In the AI Act context, the lawyer is not just someone who “translates” an EU regulation into Romanian. The role is much more hands-on and includes:

  • risk mapping – identifying AI projects in the organisation, classifying them by risk and prioritising interventions;
  • aligning AI Act, GDPR and IP rules – avoiding situations where solving one problem creates another (for instance, focusing only on AI Act technical requirements and overlooking legal basis for data or copyright issues);
  • contract design – drafting and negotiating contracts with suppliers, customers and partners in a way that reflects a realistic distribution of obligations and liabilities;
  • dealing with authorities – from informal clarifications and consultations to formal investigations, enforcement actions and litigation;
  • internal training – educating technical, product and management teams on legal requirements in a practical, non-dogmatic way.

For developers and companies working intensively with software, data and IP, collaboration with a lawyer who understands both technical language and areas such as data protection, IP, public law and even criminal law (for example in abuse or misuse scenarios) can be a real strategic asset.

On maglas.ro you will find additional articles on intellectual property, IT contracts and related litigation, which can complement this guide when AI intersects with copyright, trademarks, know-how and complex agreements with clients and suppliers.

9. Conclusions: from “we’ll see in 2026” to “what we do in the next 6 months”

The AI Act is not a regulation for 2026–2027 only. The first obligations are already in force and felt in the market, and the pressure from contracts, international clients and regulators means that the real moment to act is now.

For Romanian companies and developers, a minimal roadmap usually looks like this:

  • identify all projects and products that use AI, however indirectly;
  • classify the approximate risk level for each system (high-risk vs other categories);
  • review how personal data are used and how AI Act obligations combine with GDPR;
  • analyse whether there are copyright issues in training data or in the way the product operates;
  • build at least a basic governance and documentation framework, not just for “paper compliance” but to be able to explain, in case of control or litigation, how the AI system works and why it was considered acceptable;
  • involve a specialised lawyer at key moments: risk classification, high-impact projects, major contracts, interactions with authorities.

Ultimately, the question is not whether the AI Act, GDPR and copyright rules will become relevant for your business, but when and in what form. The earlier you clarify your position, the more control you have over how this future will look from both a legal and a business perspective.

Frequently asked questions (FAQ) on the AI Act, copyright and data protection for companies in Romania

1. From when does the AI Act actually apply to companies in Romania?

The AI Act entered into force on 1 August 2024, but its provisions apply in stages. The bans on certain AI practices and some AI literacy obligations started to apply on 2 February 2025. Obligations for general-purpose AI models (GPAI) took effect on 2 August 2025. The main compliance framework for high-risk AI systems becomes applicable on 2 August 2026, with an extended transition until 2 August 2027 for some legacy systems embedded in regulated products. For Romanian companies, this means that they must start preparing well before 2026 rather than waiting until the last minute.

2. If I only use services such as ChatGPT or other AI APIs, does the AI Act apply to me?

Yes, but in a different role. In most cases you will be classified as a user (deployer) of an AI system; if you build a product on top of such an API and market it under your own name, you may instead take on provider obligations in the value chain. Either way, you will not have all the technical obligations of the model provider, but you are responsible for how you use AI in your specific context: what data you input, how you inform individuals, whether decisions are fully automated, how you avoid discrimination and how you explain the role of AI to people affected.

3. Do I always need consent to use personal data for training a model?

Not always, but you always need a valid legal basis. In some cases, performance of a contract or legitimate interests may be used, in others explicit consent may be the only realistic option, especially for sensitive data. The point is that legitimate interests are not a magic solution for any AI processing. For high-risk projects it is strongly recommended to carry out a Data Protection Impact Assessment (DPIA) and to document why the chosen legal basis is appropriate.

4. What does the obligation to publish a “sufficiently detailed summary” of training data mean in practice?

Providers of general-purpose AI models must publish a summary of the training data that is detailed enough for rightsholders to understand, at least by categories, what kinds of content were used. They are not required to list every single file or URL, but vague statements such as “we used public data from the internet” are unlikely to be sufficient. In practice, summaries will describe types of sources (news media, forums, code repositories, licensed datasets), time periods, selection criteria and any mechanisms for excluding protected content or content where rightsholders exercised an opt-out.

5. If my software product uses scoring or classification algorithms, is it automatically high-risk?

No. Not every scoring algorithm is high-risk. The classification depends on the concrete purpose, impact on individuals and regulated sector. Systems used in recruitment, credit scoring, access to public services, education or certain health applications will often fall into high-risk categories and trigger additional requirements. Others may only be subject to transparency obligations. A case-by-case analysis is needed.

6. How do the AI Act and GDPR fit together – does complying with one help with the other?

They are complementary but not interchangeable. Meeting the technical and documentation requirements of the AI Act does not automatically ensure a valid legal basis for processing personal data under the GDPR, or compliance with principles such as data minimisation and purpose limitation. Conversely, a solid GDPR framework does not guarantee that your AI systems meet the specific requirements for high-risk AI. In practice, integrated assessments are needed, where technical and legal teams work together.

7. What are the potential fines for non-compliance with the AI Act?

The AI Act introduces very high administrative fines, in some cases up to tens of millions of euros or up to 7% of the company’s worldwide annual turnover, depending on the type of infringement and the size of the company. For smaller companies, fines of a few hundred thousand euros may already be existential. This is why compliance with the AI Act should be treated as a business survival and risk management issue, not a purely formal exercise.

8. When does it make sense to speak to a lawyer about the AI Act – isn’t reading the regulation enough?

Anyone can read the AI Act, and doing so is useful. However, what ultimately matters is how the Act applies to your concrete architecture of products, data and contracts. It usually makes sense to consult a lawyer when you are about to launch an AI product in a sensitive area (recruitment, credit, health, education), when a client proposes far-reaching liability clauses for AI use, when you receive questions or letters from an authority, or when you want to design an integrated AI Act + GDPR + IP strategy for your organisation instead of reacting ad hoc to isolated problems.
