Cita REgreta

varabungas: yet another greta continues tilting at windmills, this time fighting for a ban on autonomous, robotic combat platforms. If she and Co. succeed, we will get a clone of the Ottawa Treaty, one that left-liberal Europeans will dutifully observe (LV in the front row) and that RU, CN and US will (as always) not give a damn about.

Nobel Peace laureate Jody Williams is helping lead a campaign for a new international treaty to ban killer weapons that can select targets and fire without decision-making by a human being.

source

The main defect in the activists' position is the idea that a machine (neural-network algorithms) selecting a target under combat conditions will err more often than a John or an Ivan who has not slept for three days. It does not matter what flies or drives and shoots; what matters is who sets the target-selection criteria. A human's choice can be swayed by anger, fatigue, prejudice, stupidity, hatred and the like; a machine's choice is shaped only by the programmed adversary profile and the information its sensors supply. You picked up a weapon and put on a uniform: be ready to answer for it, including with your life. In my view, properly configured robotic platforms will reduce, not increase, collateral damage. Peace!
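The argument above can be caricatured in a few lines of code. This is purely an illustrative sketch, not any real targeting system: the profile, the threshold and the sensor fields are all invented for the example. The point is only that such a rule has no input through which fatigue, anger or prejudice could enter.

```python
# Hypothetical engagement gate: its ONLY inputs are a pre-programmed
# adversary profile and the current sensor reading. All names and
# numbers here are invented for illustration.
from dataclasses import dataclass

@dataclass
class SensorReading:
    target_class: str       # classifier output, e.g. "armored_vehicle"
    confidence: float       # classifier confidence, 0.0 .. 1.0
    civilians_nearby: bool  # fused from other sensors

# The "programmed adversary profile": which classes may be engaged,
# and the minimum confidence required before engagement is considered.
ENEMY_PROFILE = {"armored_vehicle", "artillery_piece"}
MIN_CONFIDENCE = 0.95

def may_engage(reading: SensorReading) -> bool:
    """Deterministic rule: engage only a high-confidence profile match
    with no civilians detected nearby. The same inputs always produce
    the same answer, sleepless night or not."""
    return (reading.target_class in ENEMY_PROFILE
            and reading.confidence >= MIN_CONFIDENCE
            and not reading.civilians_nearby)

print(may_engage(SensorReading("armored_vehicle", 0.97, False)))  # True
print(may_engage(SensorReading("armored_vehicle", 0.80, False)))  # False: confidence too low
print(may_engage(SensorReading("truck", 0.99, False)))            # False: not in profile
```

Whether such a gate reduces collateral damage in practice of course depends entirely on how well the profile and thresholds are configured, which is exactly the point: the responsibility sits with whoever sets the criteria, not with the machine.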

And what do we (i.e. the LV foreign-affairs gurus who represent us) think about this problem?... we think nothing. LV has quietly signed on to the EU's utterly non-committal position, which says neither yes nor no but on the whole lets the activists keep stirring the pot. LT has not yet formulated a stance of its own and has not joined the EU view either. The Estonians, meanwhile, as always...

emphasis mine

“ESTONIA
On characterisation of the systems under consideration, Estonia stated that “[a]n agreed set of
characteristics should not predetermine any future policy options” and that “there should first
be consensus on the most appropriate solution” adding that “policy should drive definitions”.68
Estonia considers an autonomous weapon system to be “any weapon system that can select
and engage targets without human intervention”,69 aligning itself with other states parties
and the ICRC. This definition is broader than the notion of LAWS and covers a spectrum
where the “boundaries of these categories remain blurry”.70 Estonia believes that the difficulty
“lies in deciding on a point on the spectrum where autonomy becomes legally and ethically
problematic” and that a “focus on human-machine interaction might be a more promising way
ahead”.71 Estonia stated that “autonomy relates to particular functions of the system, rather than
the system as a whole” and that it “is not an on/off phenomenon”.72 Estonia stated that the focus
should “be on increased autonomy in the critical functions of a weapon system, that is selecting
and engaging targets”, so any platform that relies “on a human operator to make real-time
targeting decisions, should fall outside the scope of our discussion”.73 Estonia added that
“[l]ethality is (…) not a defining feature of any weapon system”.74
On the human element in the use of lethal force, Estonia stated that “human control and
judgement are essential in the use of force”, nonetheless stating that these are “flexible terms”.75
Estonia added that “a requirement of human control reflects existing international law” and
that “individuals who plan, decide upon and carry out attacks are duty-bound to comply”.76 It
stated that human control can be exercised in various ways: “not only by making real-time
targeting decisions”, but that “activities across the entire spectrum of touchpoints – including
design, testing, deployment, command and control – must cumulatively ensure human control
that is necessary for ensuring compliance with international humanitarian law”.77 It added that
“the nature and amount of human intervention required at each of these ‘touchpoints’” would
depend on “the capabilities of the weapon system and its intended use” and that “perhaps the
most critical ‘touchpoint’ is the decision to use the weapon system in conflict”.78 Estonia believes
that humans must retain ultimate control over decisions of life and death, “not only as a moral
and ethical imperative, but as a requirement that follows from international humanitarian law”.79
Estonia stated that “[h]uman operators can sometimes achieve greater control over force by
relinquishing some aspects of their ability to adjust the way in which the force is applied”,
mentioning precision guided munitions as an example.80
On the possible options for addressing the humanitarian and international security challenges,
Estonia believes it is important to consider a broad range of policy options given the diversity of
views. Estonia is not convinced “of the need for a new legally binding instrument”, adding that
it is not persuaded “that weapon systems with autonomous functions are inherently unlawful”
and that “they need to be assessed on a case-by-case basis”.81 Estonia sees merit in examining
three issues: “the manner in which existing principles and rules of international humanitarian law
apply and should be interpreted with respect to weapon systems with autonomous functions”,
“the unique challenges involved in legally reviewing such weapon systems and the way in which
these challenges could be addressed” and “the desirable nature of human-weapon interaction,
in particular, the activities to be undertaken at different stages of life cycle of a weapon so as to
ensure compliance with international humanitarian law”.82
ESTONIA AND FINLAND
Estonia and Finland produced a Working Paper entitled “Categorising lethal autonomous
weapons systems – a technical and legal perspective understanding LAWS”. In their paper,
they clarified several characteristics of machine autonomy. They distinguished automation
(which “as a concept means known, predictable pre-programmed responses in any situation in a
defined task”) from autonomy (which “should be understood as a capability to perform the given
task(s) in a self-sufficient and self-governing manner”) from independence (whereby “only true
independence (…) means that the system would be capable of defining and thereby deciding
the ultimate goals of its functioning”).83 They noted that the “distinction between automated and
autonomous functioning is not clear-cut.”84 They also focused on human-weapon interaction,
stating, inter alia, that “[h]umans must retain ultimate control over decisions of life and death”,85
adding that this “does not arise from a discrete rule of international law”, but that human control
over weapon systems is “an important and likely indispensable way for humans to ensure
that the use of violence complies with international law.” 86 The paper noted that, to “be meaningful,
human control does not necessarily have to be exercised contemporaneously with the delivery
of force.” 87 They mentioned “requirements of a military command chain emphasize the nature of
task execution; understanding of the time dimension (delay) and the dynamics of the situation
are crucial in the task definition and authorization of lethal force.” 88

source

23 thoughts on “Cita REgreta”

  1. Most modern precision weapons fall under these “platforms”, from artillery shells with autonomous target-seeking means to anti-ship and air-combat missiles, which find and identify a target on their own within a microscopic time interval in which no “human intervention” is possible even in theory.
    One might as well demand that the destruction of each specific target be approved at a court hearing.
    And of course RU will be the chief whiner about OTHER countries using the “banned” weapons.

  2. The air force is already robot warfare: pilots are there only to execute C&C.
    One might think that fully human-controlled artillery is very humane: it dumbly hammers target coordinates somewhere far beyond the horizon.
    Are the initiative's authors calling for a return to cold steel?

    • A ban would benefit countries that are well equipped with conventional weapons but lag behind in AI research. Which ones might those be?

        • BY too. Likewise, with reservations, the second-tier (and lower) countries of technological development, BUT only on the condition that everyone actually OBSERVES such a ban.

          • At international meetings on the AI topic I have always said that any restrictions make sense only if at least the leading trio of states (USA, China, Russia) accepts them.

  3. 90% (speculation) of the spetsnaz types are incapable of making their own decisions anyway. They are all desensitized and mentally damaged, cut off from normal (healthy and sustainable) society. AI devices will at least not have the veterans' problem.

  4. Another aspect in AI's favour is the psychological one: an AI is developed by many people, so its output is a collective decision, as opposed to a sniper's individual decision to kill.

      • Read somewhere: we shoot at militarily significant hardware. Do not stand near it if you want to avoid accidents.

  5. LV foreign policy is a continuation of LV domestic policy. No domestic policy, no foreign policy. And that is the responsibility of the politicians/government. If the cabinet's opinion on this question is “uhh umm hmm ... next question”, then we join the mainstream view of the leftard-liberast Eurobureaucrats. The Foreign Ministry is not to blame.
