Refusal of application for a European Union trade mark
(Article 7 and Article 42(2) EUTMR)
Alicante, 08/10/2018
Angeliki Delicostopoulou
6, Sarantaporou Str.
GR-111 44 Athens
GREECE
Application No: 017873712
Your reference: V18034EM
Trade mark: SOPHON
Mark type: Word mark
Applicant: Sophon Technologies Limited, Suite 301, F3, Building 25, Courtyard 1, Baosheng South Road, Haidian District, Beijing, PEOPLE'S REPUBLIC OF CHINA
The Office raised an objection on 04/06/2018 pursuant to Article 7(1)(b) and (c) and Article 7(2) EUTMR because it found that the trade mark applied for is descriptive and devoid of any distinctive character, for the reasons set out in the attached letter.
The applicant submitted its observations on 23/07/2018, which may be summarised as follows.
1. ‘SOPHON’ has no meaning in English or Greek.
2. The mark is not descriptive.
3. The mark is distinctive.
Pursuant to Article 94 EUTMR, the Office may base its decision only on reasons or evidence on which the applicant has had an opportunity to present its comments.
After giving due consideration to the applicant’s arguments, the Office has decided to maintain the objection.
Meaning of ‘SOPHON’
The applicant argues that the word ‘SOPHON’ has no meaning in English and that it has no meaning or associations in Greek either.
The Office does not argue that the word ‘SOPHON’ has a meaning in English; rather, it argues that the word is a transliteration of the Greek word ‘σοφός’, meaning wise. The objection therefore relates only to the Greek-speaking consumers in the European Union.
The Office concurs with the applicant that ‘σοφός’ is an adjective and that it is declined in accordance with the noun it accompanies. However, the Office does not agree with the applicant that the transliteration of the word, namely ‘SOPHON’, would not be understood by the relevant Greek consumers. The relevant Greek consumers would understand ‘SOPHON’ as the transliteration of the Greek word for wise, regardless of the grammatical form. Whether the first or last syllable is stressed when the word is pronounced is therefore irrelevant. Transliteration is primarily concerned not with representing the sound of the original but, rather, with representing the characters, ideally accurately and unambiguously.
The applicant’s argument that the character ‘φ’ would not be transliterated as ‘ph’ is not correct. The Office acknowledges that there is a variant transliteration of the Greek word ‘σοφός’ that uses the letter ‘f’, but that does not exclude the possibility of a transliteration using the spelling ‘ph’, and nor can it be argued that the use of ‘ph’ for ‘φ’ is uncommon. Words of Greek origin, for example ‘τηλέφωνο’ (‘telephone’), ‘φωτογραφία’ (‘photograph’) and ‘φαρμακείο’ (‘pharmacy’), are often transliterated into Latin characters using ‘ph’.
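The character-by-character nature of this convention can be sketched in code. The following is an illustrative sketch only, using a simplified mapping table invented for this example rather than any official transliteration standard; it shows how a ‘ph’-based scheme maps Greek characters (accents stripped) to Latin ones.

```python
import unicodedata

# Simplified Greek-to-Latin mapping using the 'ph' convention for 'φ';
# illustrative only, not an official standard.
GREEK_TO_LATIN = {
    "α": "a", "β": "v", "γ": "g", "δ": "d", "ε": "e", "ζ": "z",
    "η": "i", "θ": "th", "ι": "i", "κ": "k", "λ": "l", "μ": "m",
    "ν": "n", "ξ": "x", "ο": "o", "π": "p", "ρ": "r", "σ": "s",
    "ς": "s", "τ": "t", "υ": "y", "φ": "ph", "χ": "ch", "ψ": "ps",
    "ω": "o",
}

def transliterate(word: str) -> str:
    # Strip accents (e.g. 'ό' -> 'ο') via Unicode decomposition,
    # then map each remaining character.
    stripped = "".join(
        c for c in unicodedata.normalize("NFD", word.lower())
        if not unicodedata.combining(c)
    )
    return "".join(GREEK_TO_LATIN.get(c, c) for c in stripped)

print(transliterate("σοφόν"))       # -> "sophon"
print(transliterate("φωτογραφία"))  # -> "photographia"
```

Under this mapping, ‘σοφόν’ yields ‘sophon’ regardless of where the stress falls, consistent with the point that transliteration represents characters rather than sound.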
Contrary to the applicant’s argument, the word ‘SOPHON’ does exist in written and spoken Greek and will be understood by the relevant Greek consumers.
Descriptiveness
Under Article 7(1)(c) EUTMR, ‘trade marks which consist exclusively of signs or indications which may serve, in trade, to designate the kind, quality, quantity, intended purpose, value, geographical origin or the time of production of the goods or of rendering of the service, or other characteristics of the goods or service’ are not to be registered.
It is settled case-law that each of the grounds for refusal to register listed in Article 7(1) EUTMR is independent and requires separate examination. Moreover, it is appropriate to interpret those grounds for refusal in the light of the general interest underlying each of them. The general interest to be taken into consideration must reflect different considerations according to the ground for refusal in question (16/09/2004, C‑329/02 P, SAT/2, EU:C:2004:532, § 25).
By prohibiting the registration as European Union trade marks of the signs and indications to which it refers, Article 7(1)(c) EUTMR
pursues an aim which is in the public interest, namely that descriptive signs or indications relating to the characteristics of goods or services in respect of which registration is sought may be freely used by all. That provision accordingly prevents such signs and indications from being reserved to one undertaking alone because they have been registered as trade marks.
(23/10/2003, C‑191/01 P, Doublemint, EU:C:2003:579, § 31).
‘The signs and indications referred to in Article 7(1)(c) [EUTMR] are those which may serve in normal usage from the point of view of the target public to designate, either directly or by reference to one of their essential characteristics, the goods or service in respect of which registration is sought’ (26/11/2003, T‑222/02, Robotunits, EU:T:2003:315, § 34).
The applicant argues that an adjective is always used together with a noun. However, a sign’s descriptiveness is assessed, first, by reference to the way in which it is understood by the relevant public and, second, by reference to the goods or services concerned (17/12/2015, T‑79/15, 3D, EU:T:2015:999, § 19 and the case-law cited).
The assessment of the mark is, therefore, based on how the relevant consumers would perceive the mark in relation to the goods and services for which registration is sought. When perceiving the mark ‘SOPHON’ in relation to the goods and services for which registration is sought, it will be immediately clear to the relevant consumer that the adjective ‘SOPHON’ relates to the product or service in question, regardless of the grammatical form of the word.
The applicant’s argument that an adjective such as ‘SOPHON’ cannot be used to denote a characteristic, attribute or quality of a product or service is incorrect. An example of an adjective commonly used to denote a characteristic, attribute or quality of a product or service is the word ‘smart’, which, inter alia, is used to describe something intelligent, the most widely known example being the ‘smart phone’.
The applicant argues that people, unlike goods and services, may be wise, as wise people have or show the ability to make good judgements based on understanding and the experience of life. The goods in Class 9 against which the objection has been raised include humanised robots, including the parts of which they consist, and artificial intelligence computer software. Artificial intelligence can be defined as ‘the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction.’ (1). ‘Strong [artificial intelligence] is [a] system with generalized human cognitive abilities so that when presented with an unfamiliar task, it has enough intelligence to find a solution’ (2). Contrary to the applicant’s argument, the relevant goods and services are therefore able to make, or relate to the ability to make, judgements based on past experience and, on this basis, could be referred to as being wise. Furthermore, robots are able to demonstrate a level of self-awareness and have passed self-awareness tests that only humans had been able to pass before (3).
The humanised robots, including the parts of which they consist, and artificial intelligence computer software may be used, for example, for surveillance and/or theft prevention. When encountering ‘SOPHON’ in relation to the goods in question, the relevant consumers will perceive the sign as indicating that the goods are wise/well informed, as if they had human intelligence. For example, it might be perceived as indicating that they are well informed in that they can recognise thieves as opposed to residents, or welcome visitors, based on past experience and/or programmed knowledge.
The Office therefore maintains that the sign ‘SOPHON’ is descriptive for the following goods in Class 9:
Class 9 Data processing apparatus; Smart cards [integrated circuit cards]; Integrated circuit cards [smart cards]; couplers [data processing equipment]; security surveillance robots; theft prevention installations, electric; radio monitor used for voice and signal reproduction; monitoring apparatus, other than for medical purposes; Video monitors; humanoid robots with artificial intelligence; chips [integrated circuits]; HD integrated graphics chip; Computer chips; Biochip sensors; electronic chips for integrated circuits; artificial intelligence computer software; computer software to operate physical robots; computer network and data servers; computers; Printed circuit boards.
Distinctiveness
Under Article 7(1)(b) EUTMR, ‘trade marks which are devoid of any distinctive character’ are not to be registered.
It is settled case-law that each of the grounds for refusal to register listed in Article 7(1) EUTMR is independent and requires separate examination. Moreover, it is appropriate to interpret those grounds for refusal in the light of the general interest underlying each of them. The general interest to be taken into consideration must reflect different considerations according to the ground for refusal in question (16/09/2004, C‑329/02 P, SAT/2, EU:C:2004:532, § 25).
The marks referred to in Article 7(1)(b) EUTMR are, in particular, those that do not enable the relevant public ‘to repeat the experience of a purchase, if it proves to be positive, or to avoid it, if it proves to be negative, on the occasion of a subsequent acquisition of the goods or services concerned’ (27/02/2002, T‑79/00, Lite, EU:T:2002:42, § 26). This is the case for, inter alia, signs commonly used in connection with the marketing of the goods or services concerned (15/09/2005, T‑320/03, Live richly, EU:T:2005:325, § 65).
Registration ‘of a trade mark which consists of signs or indications that are also used as advertising slogans, indications of quality or incitements to purchase the goods or services covered by that mark is not excluded as such by virtue of such use’ (04/10/2001, C‑517/99, Bravo, EU:C:2001:510, § 40). ‘Furthermore, it is not appropriate to apply to slogans criteria which are stricter than those applicable to other types of sign’ (11/12/2001, T‑138/00, Das Prinzip der Bequemlichkeit, EU:T:2001:286, § 44).
Although the criteria for assessing distinctiveness are the same for the various categories of marks, it may become apparent, in applying those criteria, that the relevant public’s perception is not necessarily the same for each of those categories and that, therefore, it may prove more difficult to establish distinctiveness for some categories of mark than for others (29/04/2004, C‑456/01 P & C‑457/01 P, Tabs, EU:C:2004:258, § 38).
Moreover, it is also settled case-law that the way in which the relevant public perceives a trade mark is influenced by its level of attention, which is likely to vary according to the category of goods or services in question (05/03/2003, T‑194/01, Soap device, EU:T:2003:53, § 42; and 03/12/2003, T‑305/02, Bottle, EU:T:2003:328, § 34).
A sign, such as a slogan, that fulfils functions other than that of a trade mark in the traditional sense of the term ‘is only distinctive for the purposes of Article 7(1)(b) EUTMR if it may be perceived immediately as an indication of the commercial origin of the goods or services in question, so as to enable the relevant public to distinguish, without any possibility of confusion, the goods or services of the owner of the mark from those of a different commercial origin’ (05/12/2002, T‑130/01, Real People, Real Solutions, EU:T:2002:301, § 20; and 03/07/2003, T‑122/01, Best Buy, EU:T:2003:183, § 21).
The Office also maintains that the relevant consumers would perceive ‘SOPHON’ as a promotional laudatory slogan, the function of which is to communicate a value statement. When encountering the mark in relation to the goods and services in Classes 9, 42 and 45 against which the objection has been raised, the relevant consumers would perceive the sign not as an indication of commercial origin but merely as highlighting positive aspects of the goods and services in question, namely that they are/are part of/provide/relate to the development of wise security/surveillance solutions, for example through the use of artificial intelligence technology, or that they constitute a wise choice for security/surveillance solutions.
The applicant argues that the sign ‘SOPHON’ would be perceived as an indication of commercial origin because it is part of its business name, namely ‘Sophon Technologies Limited’, and would thus be associated with the undertaking. However, the applicant provided no further information to demonstrate that ‘SOPHON’ has acquired distinctiveness through use and would be perceived by the relevant consumers as an indication of commercial origin.
Further proceedings
For the abovementioned reasons, and pursuant to Article 7(1)(b) and (c) and Article 7(2) EUTMR, the application for European Union trade mark No 17 873 712 is hereby rejected for the following goods and services:
Class 9 Calculators; data processing apparatus; monitors [computer programs]; Smart cards [integrated circuit cards]; Integrated circuit cards [smart cards]; computer peripheral devices; couplers [data processing equipment]; security surveillance robots; video screens; remote control apparatus; theft prevention installations, electric; radio monitor used for voice and signal reproduction; monitoring apparatus, other than for medical purposes; Video monitors; humanoid robots with artificial intelligence; chips [integrated circuits]; HD integrated graphics chip; Computer chips; Biochip sensors; electronic chips for integrated circuits; artificial intelligence computer software; computer software to operate physical robots; computer network and data servers; computers; Printed circuit boards; intelligent video analysis server.
Class 42 Cloud computing; computer security consultancy; internet security consultancy; data security consultancy; computer programming; rental of computer software; updating of computer software; maintenance of computer software; computer software design; technical research; research and development of new products for others; design and development of robots; Engineering services relating to robotics; software as a service [SaaS].
Class 45 Monitoring of burglar and security alarms; security screening of baggage; inspection of factories for safety purposes; tracking of stolen property; Monitoring of security systems; Surveillance services; flight passenger security service; rental of safety monitoring equipment; Security inspection services for others; consultancy services in the field of surveillance and security.
The application may proceed for the remaining services, namely:
Class 35 Advertising and publicity; on-line advertising on a computer network; market research by using computer database; Statistical evaluations of marketing data; compilation of information into computer databases; systemization of information into computer databases; business information; marketing research; marketing; personnel management consultancy; business management assistance; sales promotion for others.
According to Article 67 EUTMR, you have a right to appeal against this decision. According to Article 68 EUTMR, notice of appeal must be filed in writing at the Office within two months of the date of notification of this decision. It must be filed in the language of the proceedings in which the decision subject to appeal was taken. Furthermore, a written statement of the grounds of appeal must be filed within four months of the same date. The notice of appeal will be deemed to be filed only when the appeal fee of EUR 720 has been paid.
Anja Pernille LIGUNA
Annex I
AI (artificial intelligence) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.
AI can be categorized in any number of ways, but here are two examples.
The first classifies AI systems as either weak AI or strong AI. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple's Siri, are a form of weak AI.
Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities so that when presented with an unfamiliar task, it has enough intelligence to find a solution. The Turing Test, developed by mathematician Alan Turing in 1950, is a method used to determine if a computer can actually think like a human, although the method is controversial.
The second example comes from Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University. He categorizes AI into four types, from the kind of AI systems that exist today to sentient systems, which do not yet exist. His categories are as follows:
Type 1: Reactive machines. An example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue can identify pieces on the chess board and make predictions, but it has no memory and cannot use past experiences to inform future ones. It analyzes possible moves -- its own and its opponent's -- and chooses the most strategic move. Deep Blue and Google's AlphaGo were designed for narrow purposes and cannot easily be applied to another situation.
Type 2: Limited memory. These AI systems can use past experiences to inform future decisions. Some of the decision-making functions in self-driving cars are designed this way. Observations inform actions happening in the not-so-distant future, such as a car changing lanes. These observations are not stored permanently.
Type 3: Theory of mind. This psychology term refers to the understanding that others have their own beliefs, desires and intentions that impact the decisions they make. This kind of AI does not yet exist.
Type 4: Self-awareness. In this category, AI systems have a sense of self, have consciousness. Machines with self-awareness understand their current state and can use the information to infer what others are feeling. This type of AI does not yet exist.
AI is incorporated into a variety of different types of technology. Here are seven examples.
Automation: What makes a system or process function automatically. For example, robotic process automation (RPA) can be programmed to perform high-volume, repeatable tasks that humans normally perform. RPA is different from IT automation in that it can adapt to changing circumstances.
Machine learning: The science of getting a computer to act without explicit programming. Deep learning is a subset of machine learning that, in very simple terms, can be thought of as the automation of predictive analytics. There are three types of machine learning algorithms:
Supervised learning: Data sets are labeled so that patterns can be detected and used to label new data sets.
Unsupervised learning: Data sets aren't labeled and are sorted according to similarities or differences.
Reinforcement learning: Data sets aren't labeled but, after performing an action or several actions, the AI system is given feedback.
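The contrast between the first two settings can be illustrated with a minimal pure-Python sketch. The data points, labels and the nearest-neighbour rule below are invented for illustration; real machine learning systems use far larger data sets and more sophisticated algorithms.

```python
# Supervised learning: labeled examples let us classify new points.
labeled = [((1.0, 1.0), "spam"), ((1.2, 0.9), "spam"),
           ((5.0, 5.1), "ham"), ((4.8, 5.3), "ham")]

def nearest_label(point):
    # 1-nearest-neighbour: copy the label of the closest training example.
    def dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(labeled, key=lambda ex: dist(ex[0], point))[1]

print(nearest_label((1.1, 1.0)))  # -> "spam"

# Unsupervised learning: no labels, so we can only group by similarity,
# here by splitting points around the midpoint of the two extremes.
unlabeled = [p for p, _ in labeled]
xs = sorted(p[0] for p in unlabeled)
threshold = (xs[0] + xs[-1]) / 2
clusters = [[p for p in unlabeled if p[0] <= threshold],
            [p for p in unlabeled if p[0] > threshold]]
print(len(clusters[0]), len(clusters[1]))  # -> 2 2
```

The supervised half can name its output ("spam"/"ham") only because the training data carries labels; the unsupervised half can at best say which points belong together.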
Machine vision: The science of allowing computers to see. This technology captures and analyzes visual information using a camera, analog-to-digital conversion and digital signal processing. It is often compared to human eyesight, but machine vision isn't bound by biology and can be programmed to see through walls, for example. It is used in a range of applications from signature identification to medical image analysis. Computer vision, which is focused on machine-based image processing, is often conflated with machine vision.
Natural language processing (NLP): The processing of human -- and not computer -- language by a computer program. One of the older and best known examples of NLP is spam detection, which looks at the subject line and the text of an email and decides if it's junk. Current approaches to NLP are based on machine learning. NLP tasks include text translation, sentiment analysis and speech recognition.
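The spam-detection example above can be caricatured in a few lines. This is a toy keyword filter, with a word list and threshold invented here for illustration; as the paragraph notes, current spam detection relies on machine learning rather than fixed rules like these.

```python
# Toy spam filter: flag a message when enough known spam words appear
# in its subject line and body. Word list and threshold are invented.
SPAM_WORDS = {"winner", "free", "prize", "urgent"}

def looks_like_spam(subject: str, body: str) -> bool:
    words = set((subject + " " + body).lower().split())
    return len(words & SPAM_WORDS) >= 2

print(looks_like_spam("You are a WINNER", "Claim your free prize now"))  # -> True
print(looks_like_spam("Meeting agenda", "Notes attached"))               # -> False
```

A learned filter replaces the hand-written word list with weights estimated from labeled examples, but the input (subject and text) and the output (junk or not) are the same.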
Robotics: A field of engineering focused on the design and manufacturing of robots. Robots are often used to perform tasks that are difficult for humans to perform or perform consistently. They are used in assembly lines for car production or by NASA to move large objects in space. Researchers are also using machine learning to build robots that can interact in social settings.
Self-driving cars: These use a combination of computer vision, image recognition and deep learning to build automated skill at piloting a vehicle while staying in a given lane and avoiding unexpected obstructions, such as pedestrians.
Artificial intelligence has made its way into a number of areas. Here are six examples.
AI in healthcare. The biggest bets are on improving patient outcomes and reducing costs. Companies are applying machine learning to make better and faster diagnoses than humans. One of the best known healthcare technologies is IBM Watson. It understands natural language and is capable of responding to questions asked of it. The system mines patient data and other available data sources to form a hypothesis, which it then presents with a confidence scoring schema. Other AI applications include chatbots, computer programs used online to answer questions and assist customers, to help schedule follow-up appointments or aid patients through the billing process, and virtual health assistants that provide basic medical feedback.
AI in business. Robotic process automation is being applied to highly repetitive tasks normally performed by humans. Machine learning algorithms are being integrated into analytics and CRM platforms to uncover information on how to better serve customers. Chatbots have been incorporated into websites to provide immediate service to customers. Automation of job positions has also become a talking point among academics and IT analysts.
AI in education. AI can automate grading, giving educators more time. AI can assess students and adapt to their needs, helping them work at their own pace. AI tutors can provide additional support to students, ensuring they stay on track. AI could change where and how students learn, perhaps even replacing some teachers.
AI in finance. AI in personal finance applications, such as Mint or Turbo Tax, is disrupting financial institutions. Applications such as these collect personal data and provide financial advice. Other programs, such as IBM Watson, have been applied to the process of buying a home. Today, software performs much of the trading on Wall Street.
AI in law. The discovery process in law, sifting through documents, is often overwhelming for humans. Automating this process is a more efficient use of time. Startups are also building question-and-answer computer assistants that can be programmed to answer questions by examining the taxonomy and ontology associated with a database.
AI in manufacturing. This is an area that has been at the forefront of incorporating robots into the workflow . Industrial robots used to perform single tasks and were separated from human workers, but as the technology advanced that changed.
While AI tools present a range of new functionality for businesses, artificial intelligence also raises some ethical questions. Deep learning algorithms, which underpin many of the most advanced AI tools, only know what's in the data used during training. Most available data sets for training likely contain traces of human bias. This in turn can make the AI tools biased in their function. This has been seen in the Microsoft chatbot Tay, which learned a misogynistic and anti-Semitic vocabulary from Twitter users, and the Google Photo image classification tool that classified a group of African Americans as gorillas.
The application of AI in the realm of self-driving cars also raises ethical concerns. When an autonomous vehicle is involved in an accident, liability is unclear. Autonomous vehicles may also be put in a position where an accident is unavoidable, forcing them to make ethical decisions about how to minimize damage.
Another major concern is the potential for abuse of AI tools. Hackers are starting to use sophisticated machine learning tools to gain access to sensitive systems, complicating the issue of security beyond its current state.
Deep learning-based video and audio generation tools also present bad actors with the tools necessary to create so-called deepfakes, convincingly fabricated videos of public figures saying or doing things that never took place.
Despite these potential risks, there are few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI only indirectly. For example, federal Fair Lending regulations require financial institutions to explain credit decisions to potential customers, which limits the extent to which lenders can use deep learning algorithms, which by their nature are typically opaque. Europe's GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.
In 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered. Since that time the issue has received little attention from lawmakers.
John McCarthy, an American computer scientist, coined the term "artificial intelligence" in 1956 at the Dartmouth Conference where the discipline was born. Today, it is an umbrella term that encompasses everything from robotic process automation to actual robotics. It has gained prominence recently due, in part, to big data, or the increase in speed, size and variety of data businesses now collect. AI can perform tasks such as identifying patterns in data more efficiently than humans, enabling businesses to gain more insight from their data.
Source: https://searchenterpriseai.techtarget.com/definition/AI-Artificial-Intelligence
Annex II
Jul. 23, 2015, 2:15 PM
[Photo caption: The third robot stands up, realizing that it knows the answer to the riddle.]
Robots can staff eccentric Japanese hotels, make logical decisions by playing Minecraft, and create trippy images through Google. Now the droids may have attained a new milestone by demonstrating a level of self-awareness.
An experiment led by Professor Selmer Bringsjord of New York's Rensselaer Polytechnic Institute used the classic "wise men" logic puzzle to put a group of robots to the test.
The roboticists used a version of this riddle to see if a robot is able to distinguish itself from others.
Bringsjord and his research squad called the wise men riddle the "ultimate sifter" test because the knowledge game quickly separates people from machines -- only a person is able to pass the test.
But that is apparently no longer the case. In a demonstration to the press, Bringsjord showed that a robot passed the test.
The premise of the classic riddle involves three wise advisors to a king, each wearing a hat whose color is unseen by its wearer. The king informs his men of three facts: the contest is fair, their hats are either blue or white, and the first one to deduce the color on his head wins.
The contest would only be fair if all three men sported the same color hat. Therefore, the winning wise man would note the color of the hats on the other two and then deduce that his was the same color.
The roboticists used a version of this riddle to prove self-awareness -- all three robots were programmed to believe that two of them had been given a "dumbing pill" that would make them mute. Two robots were silenced. When asked which of them hadn't received the dumbing pill, only one was able to say "I don't know" out loud.
Upon hearing its own reply, the robot changed its answer, realizing that it was the one who hadn't received the pill.
To be able to claim that the robot is exhibiting "self-awareness", the robot must have understood the rules, recognized its own voice and been aware of the fact that it is a separate entity from the other robots. Researchers told Digital Trends that if nothing else, the robot's behavior is a "mathematically verifiable awareness of the self".
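The logic of the demonstration can be compressed into a toy simulation. The Robot class, names and update rule below are invented for illustration; they are not the researchers' actual code, which involved formal reasoning over the robot's own utterances.

```python
# Toy simulation of the "dumbing pill" test: a muted robot produces no
# sound, so only the unmuted robot can hear its own voice and revise
# its answer. Class and rules are illustrative inventions.
class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted
        self.answer = "I don't know"

    def try_speak(self):
        # A robot that received the pill is mute: no sound comes out.
        return None if self.muted else self.answer

    def hear(self, sound):
        # Recognising its own voice tells the robot it was not silenced.
        if sound is not None:
            self.answer = f"Sorry, I know now: {self.name} was not given the pill"

robots = [Robot("R1", True), Robot("R2", True), Robot("R3", False)]
for r in robots:
    r.hear(r.try_speak())

print([r.answer for r in robots])
```

Only R3 revises its answer, mirroring the step the article describes: the robot hears its own reply and infers that it must be the one that did not receive the pill.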
1 Information extracted from TechTarget on 08/10/2018 at https://searchenterpriseai.techtarget.com/definition/AI-Artificial-Intelligence (Annex I).
2 Information extracted from TechTarget on 08/10/2018 at https://searchenterpriseai.techtarget.com/definition/AI-Artificial-Intelligence (Annex I).
3 Information extracted from the article ‘This robot passed a “self-awareness” test that only humans could handle until now’ from Business Insider on 08/10/2018 at https://www.businessinsider.com/this-robot-passed-a-self-awareness-test-that-only-humans-could-handle-until-now-2015-7?IR=T (Annex II).